Dec 10 15:46:14 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 10 15:46:14 crc kubenswrapper[5114]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.388871 5114 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.391966 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.391989 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.391995 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.391999 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392006 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392010 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392015 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392020 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392025 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392029 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392035 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392040 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392044 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392048 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392054 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392059 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392063 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392067 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392071 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392075 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392079 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392083 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392087 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392092 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392095 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392099 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392104 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392108 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392112 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392117 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392121 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392125 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392129 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392133 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392137 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392141 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392145 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392149 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392152 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392158 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 
10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392162 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392166 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392170 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392175 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392178 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392182 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392186 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392191 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392196 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392200 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392205 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392209 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392213 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392218 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392221 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392225 5114 feature_gate.go:328] unrecognized feature gate: Example Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392228 5114 feature_gate.go:328] unrecognized feature gate: Example2 Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392232 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392236 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392239 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392243 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392247 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392251 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392254 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392258 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392262 5114 feature_gate.go:328] unrecognized feature gate: 
AdditionalRoutingCapabilities Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392265 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392286 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392291 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392296 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392300 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392307 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392314 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392319 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392324 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392328 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392333 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392337 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392340 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392345 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392349 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392353 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392356 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392360 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392364 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.392368 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393068 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393078 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393083 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393090 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393095 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 
15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393100 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393103 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393107 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393111 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393115 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393119 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393123 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393127 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393131 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393135 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393139 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393144 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393155 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393159 5114 feature_gate.go:328] unrecognized feature gate: Example2 Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393163 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393167 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393171 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393176 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393180 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393184 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393188 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393192 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393196 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393200 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393205 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393208 5114 feature_gate.go:328] unrecognized feature gate: 
VolumeGroupSnapshot Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393211 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393215 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393219 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393223 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393228 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393233 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393240 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393244 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393249 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393253 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393257 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393261 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393266 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393291 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393296 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393300 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393306 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393311 5114 feature_gate.go:328] unrecognized feature gate: Example Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393319 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393323 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393327 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393331 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393335 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393338 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393344 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393348 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393351 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393355 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393360 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393365 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393369 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393373 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393377 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393381 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393386 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393390 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393396 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393401 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393405 
5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393411 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393415 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393419 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393423 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393427 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393431 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393436 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393439 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393443 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393448 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393452 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393459 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393463 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393467 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393472 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.393476 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393568 5114 flags.go:64] FLAG: --address="0.0.0.0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393579 5114 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393590 5114 flags.go:64] FLAG: --anonymous-auth="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393597 5114 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393604 5114 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393610 5114 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393616 5114 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393623 5114 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393629 5114 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393633 5114 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393638 5114 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393644 5114 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393649 5114 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393654 5114 flags.go:64] FLAG: --cgroup-root="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393658 5114 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393662 5114 flags.go:64] FLAG: --client-ca-file="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393666 5114 flags.go:64] FLAG: --cloud-config="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393669 5114 flags.go:64] FLAG: --cloud-provider="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393673 5114 flags.go:64] FLAG: --cluster-dns="[]" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393677 5114 flags.go:64] FLAG: --cluster-domain="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393681 5114 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393684 5114 flags.go:64] FLAG: --config-dir="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393688 5114 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393692 5114 flags.go:64] FLAG: --container-log-max-files="5" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393697 5114 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393701 5114 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393705 5114 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393712 5114 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393716 5114 flags.go:64] FLAG: --contention-profiling="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393720 5114 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393724 5114 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393727 5114 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393731 5114 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393736 5114 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393740 5114 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393745 5114 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393748 5114 flags.go:64] FLAG: --enable-load-reader="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393752 5114 flags.go:64] FLAG: --enable-server="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393756 5114 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393761 5114 
flags.go:64] FLAG: --event-burst="100" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393765 5114 flags.go:64] FLAG: --event-qps="50" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393769 5114 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393773 5114 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393777 5114 flags.go:64] FLAG: --eviction-hard="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393783 5114 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393788 5114 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393793 5114 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393798 5114 flags.go:64] FLAG: --eviction-soft="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393803 5114 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393807 5114 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393811 5114 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393815 5114 flags.go:64] FLAG: --experimental-mounter-path="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393819 5114 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393823 5114 flags.go:64] FLAG: --fail-swap-on="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393827 5114 flags.go:64] FLAG: --feature-gates="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393832 5114 flags.go:64] FLAG: --file-check-frequency="20s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393835 5114 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393839 5114 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393843 5114 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393848 5114 flags.go:64] FLAG: --healthz-port="10248" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393852 5114 flags.go:64] FLAG: --help="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393856 5114 flags.go:64] FLAG: --hostname-override="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393860 5114 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393865 5114 flags.go:64] FLAG: --http-check-frequency="20s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393869 5114 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393873 5114 flags.go:64] FLAG: --image-credential-provider-config="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393877 5114 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393881 5114 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393886 5114 flags.go:64] FLAG: --image-service-endpoint="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393893 5114 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393896 5114 flags.go:64] FLAG: --kube-api-burst="100" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393900 5114 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393904 5114 flags.go:64] FLAG: --kube-api-qps="50" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393907 5114 flags.go:64] FLAG: --kube-reserved="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393911 5114 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393914 5114 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393918 5114 flags.go:64] FLAG: --kubelet-cgroups="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393922 5114 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393925 5114 flags.go:64] FLAG: --lock-file="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393929 5114 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393932 5114 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393936 5114 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393942 5114 flags.go:64] FLAG: --log-json-split-stream="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393946 5114 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393950 5114 flags.go:64] FLAG: --log-text-split-stream="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393953 5114 flags.go:64] FLAG: --logging-format="text" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393957 5114 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393961 5114 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393965 5114 flags.go:64] FLAG: --manifest-url="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393968 5114 flags.go:64] FLAG: --manifest-url-header="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393973 5114 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393978 5114 flags.go:64] FLAG: --max-open-files="1000000" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393983 5114 flags.go:64] FLAG: --max-pods="110" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393987 5114 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393991 5114 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393995 5114 flags.go:64] FLAG: --memory-manager-policy="None" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.393998 5114 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394002 5114 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394005 5114 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394009 5114 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394019 5114 flags.go:64] FLAG: --node-status-max-images="50" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394025 5114 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394029 5114 flags.go:64] FLAG: --oom-score-adj="-999" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394033 5114 flags.go:64] FLAG: --pod-cidr="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394036 5114 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394042 5114 flags.go:64] FLAG: --pod-manifest-path="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394046 5114 flags.go:64] FLAG: --pod-max-pids="-1" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394050 5114 flags.go:64] FLAG: --pods-per-core="0" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394053 5114 flags.go:64] FLAG: --port="10250" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394057 5114 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394061 5114 flags.go:64] FLAG: --provider-id="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394064 5114 flags.go:64] FLAG: --qos-reserved="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394068 5114 flags.go:64] FLAG: --read-only-port="10255" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394072 5114 flags.go:64] FLAG: --register-node="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394075 5114 flags.go:64] FLAG: --register-schedulable="true" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394079 5114 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394086 5114 flags.go:64] FLAG: --registry-burst="10" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394089 5114 flags.go:64] FLAG: --registry-qps="5" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394093 5114 flags.go:64] FLAG: --reserved-cpus="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394096 5114 flags.go:64] FLAG: --reserved-memory="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394101 5114 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394105 5114 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394108 5114 flags.go:64] FLAG: --rotate-certificates="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394112 5114 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394116 5114 flags.go:64] FLAG: --runonce="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394120 5114 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394124 5114 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394128 5114 flags.go:64] FLAG: --seccomp-default="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394132 5114 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 
10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394136 5114 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394140 5114 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394144 5114 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394148 5114 flags.go:64] FLAG: --storage-driver-password="root" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394153 5114 flags.go:64] FLAG: --storage-driver-secure="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394158 5114 flags.go:64] FLAG: --storage-driver-table="stats" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394162 5114 flags.go:64] FLAG: --storage-driver-user="root" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394166 5114 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394171 5114 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394175 5114 flags.go:64] FLAG: --system-cgroups="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394180 5114 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394188 5114 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394192 5114 flags.go:64] FLAG: --tls-cert-file="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394197 5114 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394203 5114 flags.go:64] FLAG: --tls-min-version="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394207 5114 flags.go:64] FLAG: --tls-private-key-file="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394211 5114 flags.go:64] FLAG: --topology-manager-policy="none" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394215 5114 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394220 5114 flags.go:64] FLAG: --topology-manager-scope="container" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394225 5114 flags.go:64] FLAG: --v="2" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394231 5114 flags.go:64] FLAG: --version="false" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394236 5114 flags.go:64] FLAG: --vmodule="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394242 5114 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394246 5114 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394349 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394354 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394358 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394363 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394366 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 10 15:46:14 crc 
kubenswrapper[5114]: W1210 15:46:14.394370 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394373 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394376 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394380 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394383 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394386 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394389 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394395 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394399 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394402 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394406 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394409 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394412 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394415 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394418 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394422 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394425 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394428 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394431 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394435 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394438 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394441 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394444 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394447 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394450 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394453 5114 feature_gate.go:328] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394457 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394460 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394463 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394466 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394471 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394475 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394478 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394482 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394485 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394489 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394492 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394495 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394499 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394504 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394507 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394510 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394514 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394517 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394520 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394523 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394526 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394529 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394533 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394536 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394539 5114 feature_gate.go:328] unrecognized 
feature gate: PinnedImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394542 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394545 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394550 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394553 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394557 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394560 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394563 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394566 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394570 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394573 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394577 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394583 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394586 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394589 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394592 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394596 5114 feature_gate.go:328] unrecognized feature gate: Example2 Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394599 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394602 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394605 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394608 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394613 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394616 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394619 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394622 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394626 5114 feature_gate.go:328] unrecognized feature gate: 
GatewayAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394629 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394633 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394636 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394640 5114 feature_gate.go:328] unrecognized feature gate: Example Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.394643 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.394832 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.404080 5114 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.404116 5114 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404191 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404200 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404205 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404210 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404215 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404220 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404224 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404229 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404233 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404239 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404244 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404249 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404254 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404259 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404264 5114 feature_gate.go:328] unrecognized feature 
gate: VSphereHostVMGroupZonal Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404283 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404290 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404295 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404314 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404321 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404326 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404330 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404334 5114 feature_gate.go:328] unrecognized feature gate: Example Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404339 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404343 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404347 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404351 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404355 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404361 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404365 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404369 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404374 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404378 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404382 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404387 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404391 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404395 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404399 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404403 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404408 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: 
W1210 15:46:14.404413 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404418 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404422 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404429 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404433 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404438 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404441 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404446 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404452 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404459 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404464 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404468 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404472 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404477 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404481 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404485 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404489 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404494 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404498 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404502 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404506 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404510 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404514 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404519 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404523 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 
15:46:14.404528 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404532 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404536 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404540 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404545 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404549 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404553 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404558 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404563 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404567 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404572 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404578 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404582 5114 feature_gate.go:328] unrecognized feature gate: Example2 Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404586 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404591 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404595 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404600 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404604 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404609 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404615 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404622 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.404630 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404827 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404839 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404844 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404849 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404854 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404859 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404863 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404868 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404872 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404876 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404881 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404885 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404890 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404894 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404899 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404903 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404908 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404912 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404917 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 10 15:46:14 crc kubenswrapper[5114]: 
W1210 15:46:14.404921 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404926 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404930 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404935 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404940 5114 feature_gate.go:328] unrecognized feature gate: Example Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404944 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404949 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404954 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404961 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404965 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404969 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404974 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404978 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404983 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404986 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404990 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404995 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.404999 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405003 5114 feature_gate.go:328] unrecognized feature gate: Example2 Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405007 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405011 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405016 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405020 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405025 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405029 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405033 5114 feature_gate.go:328] 
unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405038 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405042 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405046 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405051 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405055 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405060 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405066 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405072 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405077 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405081 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405087 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405092 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405096 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405100 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405104 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405108 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405112 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405117 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405121 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405125 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405130 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405134 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405138 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405142 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405146 5114 feature_gate.go:328] 
unrecognized feature gate: ExternalOIDC Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405150 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405153 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405158 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405162 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405166 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405170 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405174 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405179 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405183 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405187 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405192 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405196 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405201 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405205 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405209 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 10 15:46:14 crc kubenswrapper[5114]: W1210 15:46:14.405213 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.405220 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.405617 5114 server.go:962] "Client rotation is on, will bootstrap in background" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.407896 5114 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.410983 5114 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.411082 5114 certificate_store.go:147] "Loading cert/key pair from a file" 
filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.411601 5114 server.go:1019] "Starting client certificate rotation" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.411713 5114 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.411753 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.416315 5114 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.417611 5114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.418003 5114 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.424331 5114 log.go:25] "Validated CRI v1 runtime API" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.449218 5114 log.go:25] "Validated CRI v1 image API" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.450961 5114 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.452923 5114 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-10-15-40-17-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.452955 5114 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.471036 5114 manager.go:217] Machine: {Timestamp:2025-12-10 15:46:14.469852259 +0000 UTC m=+0.190653456 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:ea4de44f-fffe-48de-b641-4c0ea71eb3ac BootID:f1983090-c631-42b8-889c-661e5120de50 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 
Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:60:f5:bd Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:60:f5:bd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3a:87:ce Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b2:c4:e8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b9:5f:f5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:eb:4b:2c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:e8:b0:93:41:db Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3a:c2:21:d6:b1:3d Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.471232 5114 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.471404 5114 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.472800 5114 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.472859 5114 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473051 5114 topology_manager.go:138] "Creating topology manager with none policy" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473061 5114 container_manager_linux.go:306] "Creating device plugin manager" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473082 5114 
manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473265 5114 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473620 5114 state_mem.go:36] "Initialized new in-memory state store" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.473795 5114 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.474382 5114 kubelet.go:491] "Attempting to sync node with API server" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.474406 5114 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.474422 5114 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.474439 5114 kubelet.go:397] "Adding apiserver pod source" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.474455 5114 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.476338 5114 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.476353 5114 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.476904 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.477827 5114 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.477841 5114 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.478005 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.479961 5114 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.480370 5114 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.480957 5114 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481589 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481655 5114 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/empty-dir" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481674 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481684 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481694 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481701 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481710 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481717 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481729 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481747 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481778 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.481960 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.482220 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.482265 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.483470 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.494533 5114 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.494624 5114 server.go:1295] "Started kubelet" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.494908 5114 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.495189 5114 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.495401 5114 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 10 15:46:14 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.496979 5114 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.498395 5114 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.498620 5114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.497075 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe53048c4b29d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,LastTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.499105 5114 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.499248 5114 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.499360 5114 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.499248 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.499618 5114 server.go:317] "Adding debug handlers to kubelet server" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.502625 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.502731 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.503104 5114 factory.go:55] Registering systemd factory Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.503236 5114 factory.go:223] Registration of the systemd container factory successfully Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.503918 5114 factory.go:153] Registering CRI-O factory Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.503944 5114 factory.go:223] Registration of the crio container factory successfully Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.504021 5114 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.504045 5114 factory.go:103] Registering Raw factory Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.504065 5114 manager.go:1196] Started watching for new ooms in manager Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.504602 5114 manager.go:319] Starting recovery of all containers Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.525430 5114 manager.go:324] Recovery completed Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541332 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541913 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541935 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541954 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541970 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.541985 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542002 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542015 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542036 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542048 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542062 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542078 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542092 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542105 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542125 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542137 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542151 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542167 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542178 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542180 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542723 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 
15:46:14.542760 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542784 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542799 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542828 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542849 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542891 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542912 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542925 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542961 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542975 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.542995 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543008 5114 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543027 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543060 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543098 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543113 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543131 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543149 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543163 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543179 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543193 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543211 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543229 5114 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543243 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543260 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543319 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543341 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543356 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543371 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543390 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543405 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543423 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543438 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543455 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543470 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543486 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543526 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543544 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543561 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.543574 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.548034 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.548095 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.548112 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.548687 5114 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549026 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549047 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549061 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549074 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549108 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549121 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549134 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549146 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549159 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549193 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549208 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549219 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549232 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549260 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549290 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549305 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549318 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549350 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549364 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549375 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549389 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549401 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549436 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549446 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549467 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549480 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549521 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549538 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549552 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549583 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549596 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549621 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549632 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549760 5114 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549772 5114 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.549795 5114 state_mem.go:36] "Initialized new in-memory state store" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550197 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550240 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550255 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550293 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550306 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550319 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550332 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550344 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550378 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550391 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550403 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550414 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550426 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550460 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550473 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550509 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550545 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550557 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550570 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550621 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550785 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550824 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550842 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550857 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550898 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550919 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550934 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550945 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.550985 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551004 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551018 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551030 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551066 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551083 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551099 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551112 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551155 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551174 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551186 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551201 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551241 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551256 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551304 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551322 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551337 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551351 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551393 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551408 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551424 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551436 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551470 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551483 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551495 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551508 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551549 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551569 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551586 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551603 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551647 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551664 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551680 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551695 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551736 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551755 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551773 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551788 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551805 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551821 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551836 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551878 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551896 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551913 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551930 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551945 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551959 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551972 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551985 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.551997 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" 
volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552011 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552023 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552035 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552047 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552060 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552093 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552107 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552120 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552133 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552145 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552157 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552171 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552183 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552195 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552207 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552219 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552231 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552243 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552257 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552286 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552300 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552312 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552324 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552337 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552350 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552362 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552373 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552387 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552399 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552411 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552423 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552436 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552448 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552482 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552496 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552508 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552521 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552532 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552544 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552556 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552568 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552581 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552594 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552607 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552620 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552665 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552678 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552690 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552722 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552734 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552746 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552757 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552769 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552967 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.552987 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553112 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553181 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553196 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553208 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553220 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553232 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553242 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553255 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553266 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553305 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553317 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553328 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553341 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553352 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553364 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553377 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553389 5114 reconstruct.go:97] "Volume reconstruction finished" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.553397 5114 reconciler.go:26] "Reconciler: start to sync state" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.556234 5114 policy_none.go:49] "None policy: Start" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.556306 5114 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.556325 5114 state_mem.go:35] "Initializing new in-memory state store" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.564531 5114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.566925 5114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.567020 5114 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.567122 5114 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.567182 5114 kubelet.go:2451] "Starting kubelet main sync loop" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.567614 5114 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.569179 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.599695 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.602556 5114 manager.go:341] "Starting Device Plugin manager" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.602633 5114 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.602651 5114 server.go:85] "Starting device plugin registration server" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.603332 5114 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.603356 5114 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.603498 5114 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.603832 5114 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.603846 5114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.608800 5114 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.608847 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.668541 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.668734 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.669726 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.669779 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.669797 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.670647 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.670810 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.670850 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671489 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671490 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671543 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671559 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.671644 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.672313 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.672539 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.672571 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673046 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673068 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673100 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673114 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673934 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.673937 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674066 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674509 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674535 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674556 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674568 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674539 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.674593 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675292 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675362 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675394 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675952 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675984 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675996 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.675957 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.676043 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.676052 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.676741 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.676769 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.677329 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.677351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.677362 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.699282 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.703054 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.703495 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.704374 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.704430 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.704443 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.704461 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.704769 5114 
kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.718571 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.738736 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757105 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.757319 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757359 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757413 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757436 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757473 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.757494 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.758331 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 
15:46:14.758553 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.758851 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.758905 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759008 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759061 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759036 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759144 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759205 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759257 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759320 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759358 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759396 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759430 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759470 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759514 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759552 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759588 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759626 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759670 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc 
kubenswrapper[5114]: I1210 15:46:14.759700 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.759737 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.760100 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.760354 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.765453 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.860963 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861006 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861032 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861053 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861074 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod 
\"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861094 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861112 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861145 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861200 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861177 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861194 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861219 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861257 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861266 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861160 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861390 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861387 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861564 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861567 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861702 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861727 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861731 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861746 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861772 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861774 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861800 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861827 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861847 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861671 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861880 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861773 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.861988 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.905843 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.907136 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.907293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.907375 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:14 crc kubenswrapper[5114]: I1210 15:46:14.907468 5114 kubelet_node_status.go:78] 
"Attempting to register node" node="crc" Dec 10 15:46:14 crc kubenswrapper[5114]: E1210 15:46:14.908088 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.000432 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.019160 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:15 crc kubenswrapper[5114]: W1210 15:46:15.023645 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-21d01321689959239da14498e39fd85fabd6173f90727c49139d3e60299ec262 WatchSource:0}: Error finding container 21d01321689959239da14498e39fd85fabd6173f90727c49139d3e60299ec262: Status 404 returned error can't find the container with id 21d01321689959239da14498e39fd85fabd6173f90727c49139d3e60299ec262 Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.030538 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:46:15 crc kubenswrapper[5114]: W1210 15:46:15.038100 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-b9d4457f6213cc7872d070a1a843f077b3225788378cacef729d22a410997434 WatchSource:0}: Error finding container b9d4457f6213cc7872d070a1a843f077b3225788378cacef729d22a410997434: Status 404 returned error can't find the container with id b9d4457f6213cc7872d070a1a843f077b3225788378cacef729d22a410997434 Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.039099 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:46:15 crc kubenswrapper[5114]: W1210 15:46:15.055599 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-5d20854fb576afb7ce958a42c52459ce9e05578acdde7edf156cd201024a6f9c WatchSource:0}: Error finding container 5d20854fb576afb7ce958a42c52459ce9e05578acdde7edf156cd201024a6f9c: Status 404 returned error can't find the container with id 5d20854fb576afb7ce958a42c52459ce9e05578acdde7edf156cd201024a6f9c Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.058136 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.066440 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:15 crc kubenswrapper[5114]: W1210 15:46:15.080322 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-e4069ac83231534007280e3632349adcbff5511b9120cf44b84e9bab05f744ca WatchSource:0}: Error finding container e4069ac83231534007280e3632349adcbff5511b9120cf44b84e9bab05f744ca: Status 404 returned error can't find the container with id e4069ac83231534007280e3632349adcbff5511b9120cf44b84e9bab05f744ca Dec 10 15:46:15 crc kubenswrapper[5114]: W1210 15:46:15.086466 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-9e73a140280c341958993378a5159e621626801b98da2e15c7548897f76067b5 WatchSource:0}: Error finding container 9e73a140280c341958993378a5159e621626801b98da2e15c7548897f76067b5: Status 404 returned error can't find the container with id 9e73a140280c341958993378a5159e621626801b98da2e15c7548897f76067b5 Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.104528 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.308818 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.311139 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.311208 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.311222 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.311261 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.312120 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.399174 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.484378 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.575759 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9e73a140280c341958993378a5159e621626801b98da2e15c7548897f76067b5"} Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.578758 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e4069ac83231534007280e3632349adcbff5511b9120cf44b84e9bab05f744ca"} Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.581315 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5d20854fb576afb7ce958a42c52459ce9e05578acdde7edf156cd201024a6f9c"} Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.582803 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab"} Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.582842 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b9d4457f6213cc7872d070a1a843f077b3225788378cacef729d22a410997434"} Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.582968 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.583810 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.583834 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.583843 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.584024 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:15 crc kubenswrapper[5114]: I1210 15:46:15.585321 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"21d01321689959239da14498e39fd85fabd6173f90727c49139d3e60299ec262"} Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.783803 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.906036 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Dec 10 15:46:15 crc kubenswrapper[5114]: E1210 15:46:15.985861 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.013318 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.112889 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.114123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.114187 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.114201 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.114228 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.114771 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.484076 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.548463 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.549448 5114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.588676 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613" exitCode=0 Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.588748 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.588947 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.589645 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.589698 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.589716 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.589985 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.590223 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e" exitCode=0 Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.590254 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.590582 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.591450 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.591470 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.591511 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.591529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.591848 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592056 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592115 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592261 5114 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a" exitCode=0 Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592346 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.592365 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.593058 5114 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.593092 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.593106 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.593343 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.593851 5114 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab" exitCode=0 Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.593911 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.594021 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.594439 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.594468 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.594480 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.594661 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.598252 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.598339 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.598370 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.598395 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef"} Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.598642 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.599963 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.600000 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:16 crc kubenswrapper[5114]: I1210 15:46:16.600012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:16 crc kubenswrapper[5114]: E1210 15:46:16.600200 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.071887 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.099801 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe53048c4b29d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,LastTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.248069 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.603514 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.603673 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.604473 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.604504 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.604516 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.604673 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.607376 5114 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.607402 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.607413 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.607508 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.608021 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.608044 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.608056 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.608199 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.610415 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.610439 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.610450 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.610460 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613026 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e" exitCode=0 Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613146 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613220 5114 kubelet_node_status.go:413] "Setting node annotation 
to enable volume controller attach/detach" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613332 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e"} Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613789 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.613838 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.614173 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.614646 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.614670 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.614687 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:17 crc kubenswrapper[5114]: E1210 15:46:17.615333 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.715407 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.716282 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.716308 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.716316 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:17 crc kubenswrapper[5114]: I1210 15:46:17.716340 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.622710 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d660c46f43ddf7099017beb0aa69f3e5a073829386002b1c17d2d4820d1176b0"} Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.622965 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.624083 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.624131 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.624146 5114 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:18 crc kubenswrapper[5114]: E1210 15:46:18.624468 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.625602 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64" exitCode=0 Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.625664 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64"} Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.625849 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.625889 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626249 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626600 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626634 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626645 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626664 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626692 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.626702 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:18 crc kubenswrapper[5114]: E1210 15:46:18.626940 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:18 crc kubenswrapper[5114]: E1210 15:46:18.627121 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.627798 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.627856 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.627878 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:18 crc kubenswrapper[5114]: E1210 15:46:18.628343 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.642483 5114 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:18 crc kubenswrapper[5114]: I1210 15:46:18.657495 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.250490 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.250825 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.251949 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.252002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.252017 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:19 crc kubenswrapper[5114]: E1210 15:46:19.252631 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635207 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27"} Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635285 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27"} Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635303 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84"} Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635315 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516"} Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635326 5114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635394 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.635425 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636281 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636287 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636325 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636314 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:19 crc kubenswrapper[5114]: I1210 15:46:19.636343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:19 crc kubenswrapper[5114]: E1210 15:46:19.636754 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:19 crc kubenswrapper[5114]: E1210 15:46:19.637144 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.248446 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.249084 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.644922 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95"} Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645052 5114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645086 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645121 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645838 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645894 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.645958 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.646009 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.646036 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:20 crc kubenswrapper[5114]: E1210 15:46:20.646244 5114 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:20 crc kubenswrapper[5114]: E1210 15:46:20.646552 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.822755 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.823035 5114 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.823093 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.824091 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.824134 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.824151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:20 crc kubenswrapper[5114]: E1210 15:46:20.824669 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:20 crc kubenswrapper[5114]: I1210 15:46:20.858875 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.046127 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.296073 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.647838 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.648026 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.648912 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.648948 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.648979 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.649002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.648984 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:21 crc kubenswrapper[5114]: I1210 15:46:21.649089 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:21 crc kubenswrapper[5114]: E1210 15:46:21.649797 5114 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:21 crc kubenswrapper[5114]: E1210 15:46:21.652339 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.236704 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.650490 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.650630 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651447 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651485 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651598 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651632 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:22 crc kubenswrapper[5114]: I1210 15:46:22.651648 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:22 crc kubenswrapper[5114]: E1210 15:46:22.651896 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:22 crc kubenswrapper[5114]: E1210 15:46:22.652525 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.295931 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.296318 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.297436 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.297492 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.297507 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:24 crc kubenswrapper[5114]: E1210 15:46:24.298001 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:24 crc kubenswrapper[5114]: E1210 15:46:24.609111 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 
15:46:24.709519 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.709827 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.711004 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.711090 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:24 crc kubenswrapper[5114]: I1210 15:46:24.711106 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:24 crc kubenswrapper[5114]: E1210 15:46:24.711730 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:25.684174 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:25.684595 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:25.685610 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:25.685654 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:25.685668 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:26 crc kubenswrapper[5114]: E1210 15:46:25.686114 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:26.312465 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:26.659405 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:26.660144 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:26.660203 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:26 crc kubenswrapper[5114]: I1210 15:46:26.660221 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:26 crc kubenswrapper[5114]: E1210 15:46:26.660661 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:27 crc kubenswrapper[5114]: I1210 15:46:27.484971 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 10 15:46:27 crc 
kubenswrapper[5114]: E1210 15:46:27.507521 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 10 15:46:27 crc kubenswrapper[5114]: I1210 15:46:27.600300 5114 trace.go:236] Trace[1948328191]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:46:17.599) (total time: 10000ms): Dec 10 15:46:27 crc kubenswrapper[5114]: Trace[1948328191]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:46:27.600) Dec 10 15:46:27 crc kubenswrapper[5114]: Trace[1948328191]: [10.000734081s] [10.000734081s] END Dec 10 15:46:27 crc kubenswrapper[5114]: E1210 15:46:27.600342 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 10 15:46:27 crc kubenswrapper[5114]: E1210 15:46:27.717804 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 10 15:46:28 crc kubenswrapper[5114]: I1210 15:46:28.482800 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 15:46:28 crc kubenswrapper[5114]: I1210 15:46:28.482940 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 10 15:46:28 crc kubenswrapper[5114]: I1210 15:46:28.492754 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 15:46:28 crc kubenswrapper[5114]: I1210 15:46:28.492859 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.249414 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.249605 5114 prober.go:120] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 10 15:46:30 crc kubenswrapper[5114]: E1210 15:46:30.709762 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.828878 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.829166 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.829731 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.829819 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.830483 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.830556 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.830577 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:30 crc kubenswrapper[5114]: E1210 15:46:30.831235 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.834231 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.918591 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.919680 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.919744 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.919763 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:30 crc kubenswrapper[5114]: I1210 15:46:30.919799 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:30 crc kubenswrapper[5114]: E1210 15:46:30.933906 5114 kubelet_node_status.go:116] "Unable to register 
node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:46:31 crc kubenswrapper[5114]: E1210 15:46:31.017526 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.328147 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.328493 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.332218 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.332363 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.332383 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:31 crc kubenswrapper[5114]: E1210 15:46:31.333432 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.342028 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.674675 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.674713 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675076 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675156 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675294 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675353 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675367 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675474 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:31 
crc kubenswrapper[5114]: I1210 15:46:31.675513 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:31 crc kubenswrapper[5114]: I1210 15:46:31.675526 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:31 crc kubenswrapper[5114]: E1210 15:46:31.676229 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:31 crc kubenswrapper[5114]: E1210 15:46:31.676895 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.486851 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe53048c4b29d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,LastTimestamp:2025-12-10 15:46:14.494565021 +0000 UTC m=+0.215366198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: I1210 15:46:33.487208 5114 trace.go:236] Trace[206843960]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:46:21.374) (total time: 12112ms): Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[206843960]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 12112ms (15:46:33.487) Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[206843960]: [12.11233234s] [12.11233234s] END Dec 10 15:46:33 crc kubenswrapper[5114]: I1210 15:46:33.487254 5114 trace.go:236] Trace[608060067]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:46:18.736) (total time: 14750ms): Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[608060067]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14750ms (15:46:33.487) Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[608060067]: [14.750775747s] [14.750775747s] END Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.487334 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.487253 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 10 15:46:33 crc 
kubenswrapper[5114]: I1210 15:46:33.487545 5114 trace.go:236] Trace[1046782009]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:46:19.074) (total time: 14412ms): Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[1046782009]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14412ms (15:46:33.487) Dec 10 15:46:33 crc kubenswrapper[5114]: Trace[1046782009]: [14.412522158s] [14.412522158s] END Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.487559 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 10 15:46:33 crc kubenswrapper[5114]: I1210 15:46:33.490826 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.491176 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.493682 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: I1210 15:46:33.494375 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.499559 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.507347 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304f65436e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.60575115 +0000 UTC m=+0.326552327,LastTimestamp:2025-12-10 15:46:14.60575115 +0000 UTC m=+0.326552327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.512509 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.669758831 +0000 UTC m=+0.390560018,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.516758 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.669791189 +0000 UTC m=+0.390592376,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.525667 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.669804008 +0000 UTC m=+0.390605205,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.530797 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.671509868 +0000 UTC m=+0.392311045,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.536511 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.671533367 +0000 UTC m=+0.392334554,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.542314 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.671550786 +0000 UTC m=+0.392351983,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.546367 5114 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.671565325 +0000 UTC m=+0.392366512,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.550983 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.671597123 +0000 UTC m=+0.392398300,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.555396 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.67165357 +0000 UTC m=+0.392454747,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.559466 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.673060888 +0000 UTC m=+0.393862065,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc 
kubenswrapper[5114]: E1210 15:46:33.565163 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.673075787 +0000 UTC m=+0.393876964,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.571013 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.673089516 +0000 UTC m=+0.393890693,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.576559 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.673108705 +0000 UTC m=+0.393909882,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.581241 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.673119944 +0000 UTC m=+0.393921121,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.585952 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.673152392 +0000 UTC m=+0.393953579,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.592373 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.674528632 +0000 UTC m=+0.395329809,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.597532 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf4ef8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf4ef8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548057998 +0000 UTC m=+0.268859185,LastTimestamp:2025-12-10 15:46:14.674549291 +0000 UTC m=+0.395350467,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.601640 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC 
m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.67456341 +0000 UTC m=+0.395364597,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.605596 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5db49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5db49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548118345 +0000 UTC m=+0.268919532,LastTimestamp:2025-12-10 15:46:14.674573649 +0000 UTC m=+0.395374816,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.609770 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187fe5304bf5a684\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187fe5304bf5a684 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:14.548104836 +0000 UTC m=+0.268906023,LastTimestamp:2025-12-10 15:46:14.674586298 +0000 UTC m=+0.395387485,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.615659 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe53068c02ed2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.03114005 +0000 UTC m=+0.751941227,LastTimestamp:2025-12-10 15:46:15.03114005 +0000 UTC m=+0.751941227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.619630 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530694974cb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.040136395 +0000 UTC m=+0.760937562,LastTimestamp:2025-12-10 15:46:15.040136395 +0000 UTC m=+0.760937562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.623519 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe5306a5b1cae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.058070702 +0000 UTC m=+0.778871899,LastTimestamp:2025-12-10 15:46:15.058070702 +0000 UTC m=+0.778871899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.627397 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5306c0bac43 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.086419011 +0000 UTC m=+0.807220188,LastTimestamp:2025-12-10 15:46:15.086419011 +0000 UTC m=+0.807220188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.632078 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5306c4ea905 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.090809093 +0000 UTC m=+0.811610270,LastTimestamp:2025-12-10 15:46:15.090809093 +0000 UTC m=+0.811610270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.637664 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530885358a0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.56087824 +0000 UTC m=+1.281679417,LastTimestamp:2025-12-10 15:46:15.56087824 +0000 UTC m=+1.281679417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.643945 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530888380f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.564034291 +0000 UTC m=+1.284835468,LastTimestamp:2025-12-10 15:46:15.564034291 +0000 UTC m=+1.284835468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.647838 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53088841781 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.564072833 +0000 UTC m=+1.284874010,LastTimestamp:2025-12-10 15:46:15.564072833 +0000 UTC m=+1.284874010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.651449 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe530888ed29f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.564776095 +0000 UTC m=+1.285577272,LastTimestamp:2025-12-10 15:46:15.564776095 +0000 UTC m=+1.285577272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.655021 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530888f6a81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.564814977 +0000 UTC m=+1.285616154,LastTimestamp:2025-12-10 15:46:15.564814977 +0000 UTC m=+1.285616154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.659284 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe53089006346 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.572218694 +0000 UTC m=+1.293019871,LastTimestamp:2025-12-10 15:46:15.572218694 +0000 UTC m=+1.293019871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.663615 5114 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530893d8ab8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.576226488 +0000 UTC m=+1.297027665,LastTimestamp:2025-12-10 15:46:15.576226488 +0000 UTC m=+1.297027665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.667372 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530895096ad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.577474733 +0000 UTC m=+1.298275910,LastTimestamp:2025-12-10 15:46:15.577474733 +0000 UTC m=+1.298275910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.674168 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe53089afacbc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.5837063 +0000 UTC m=+1.304507477,LastTimestamp:2025-12-10 15:46:15.5837063 +0000 UTC m=+1.304507477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.684937 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53089ba6e2d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.584411181 +0000 UTC m=+1.305212358,LastTimestamp:2025-12-10 15:46:15.584411181 +0000 UTC m=+1.305212358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.693221 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe53089be8520 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.5846792 +0000 UTC m=+1.305480377,LastTimestamp:2025-12-10 15:46:15.5846792 +0000 UTC m=+1.305480377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.698193 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe5309c455be0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.895505888 +0000 UTC m=+1.616307075,LastTimestamp:2025-12-10 15:46:15.895505888 +0000 UTC m=+1.616307075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.702181 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe5309d0686ae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.908165294 +0000 UTC 
m=+1.628966471,LastTimestamp:2025-12-10 15:46:15.908165294 +0000 UTC m=+1.628966471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.708267 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe5309d195663 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:15.909398115 +0000 UTC m=+1.630199292,LastTimestamp:2025-12-10 15:46:15.909398115 +0000 UTC m=+1.630199292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.714348 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530b1394252 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.24703445 +0000 UTC m=+1.967835627,LastTimestamp:2025-12-10 15:46:16.24703445 +0000 UTC m=+1.967835627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.724612 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530b1bf59a9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.255822249 +0000 UTC m=+1.976623426,LastTimestamp:2025-12-10 15:46:16.255822249 +0000 UTC 
m=+1.976623426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.732775 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530b1cbdbec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.256642028 +0000 UTC m=+1.977443215,LastTimestamp:2025-12-10 15:46:16.256642028 +0000 UTC m=+1.977443215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.737518 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530bca8c2a8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.438891176 +0000 UTC m=+2.159692353,LastTimestamp:2025-12-10 15:46:16.438891176 +0000 UTC m=+2.159692353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.744420 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe530bd5a9a6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.450546287 +0000 UTC m=+2.171347464,LastTimestamp:2025-12-10 15:46:16.450546287 +0000 UTC m=+2.171347464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.748922 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530c5be4667 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.591296103 +0000 UTC m=+2.312097290,LastTimestamp:2025-12-10 15:46:16.591296103 +0000 UTC m=+2.312097290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.753918 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe530c5dad6df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.593168095 +0000 UTC m=+2.313969272,LastTimestamp:2025-12-10 15:46:16.593168095 +0000 UTC m=+2.313969272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.760201 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe530c5eef384 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.594486148 +0000 UTC m=+2.315287315,LastTimestamp:2025-12-10 15:46:16.594486148 +0000 UTC m=+2.315287315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc 
kubenswrapper[5114]: E1210 15:46:33.765846 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530c61ae288 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.597365384 +0000 UTC m=+2.318166591,LastTimestamp:2025-12-10 15:46:16.597365384 +0000 UTC m=+2.318166591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.773737 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530d391a26f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.823251567 +0000 UTC m=+2.544052744,LastTimestamp:2025-12-10 15:46:16.823251567 +0000 UTC m=+2.544052744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.779356 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530d39dcd46 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.824048966 +0000 UTC m=+2.544850143,LastTimestamp:2025-12-10 15:46:16.824048966 +0000 UTC m=+2.544850143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.786630 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe530d3a8266c 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.824727148 +0000 UTC m=+2.545528325,LastTimestamp:2025-12-10 15:46:16.824727148 +0000 UTC m=+2.545528325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.793185 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe530d3aca7c5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.825022405 +0000 UTC m=+2.545823582,LastTimestamp:2025-12-10 15:46:16.825022405 +0000 UTC m=+2.545823582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.800757 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530d44bd1f6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.83545343 +0000 UTC m=+2.556254607,LastTimestamp:2025-12-10 15:46:16.83545343 +0000 UTC m=+2.556254607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.810298 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530d45c81db openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.836547035 +0000 UTC m=+2.557348211,LastTimestamp:2025-12-10 15:46:16.836547035 +0000 UTC m=+2.557348211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.816031 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187fe530d48c1ef1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.839667441 +0000 UTC m=+2.560468608,LastTimestamp:2025-12-10 15:46:16.839667441 +0000 UTC m=+2.560468608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.821002 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530d4933547 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.840131911 +0000 UTC m=+2.560933088,LastTimestamp:2025-12-10 15:46:16.840131911 +0000 UTC m=+2.560933088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.826394 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe530d49d97e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.840812512 +0000 UTC m=+2.561613679,LastTimestamp:2025-12-10 15:46:16.840812512 +0000 UTC m=+2.561613679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.833064 5114 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530d4a29df9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:16.841141753 +0000 UTC m=+2.561942930,LastTimestamp:2025-12-10 15:46:16.841141753 +0000 UTC m=+2.561942930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.839248 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530e0ee4d24 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.047428388 +0000 UTC m=+2.768229565,LastTimestamp:2025-12-10 15:46:17.047428388 +0000 UTC m=+2.768229565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.844084 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530e0f0c1e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.047589344 +0000 UTC m=+2.768390521,LastTimestamp:2025-12-10 15:46:17.047589344 +0000 UTC m=+2.768390521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.848697 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530e185cd89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.057357193 +0000 UTC m=+2.778158370,LastTimestamp:2025-12-10 15:46:17.057357193 +0000 UTC m=+2.778158370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.852589 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530e186a0d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.057411284 +0000 UTC m=+2.778212461,LastTimestamp:2025-12-10 15:46:17.057411284 +0000 UTC m=+2.778212461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.856872 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530e19b9309 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.058784009 +0000 UTC m=+2.779585186,LastTimestamp:2025-12-10 15:46:17.058784009 +0000 UTC m=+2.779585186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.860936 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530e1ae773a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.060022074 +0000 UTC m=+2.780823251,LastTimestamp:2025-12-10 15:46:17.060022074 +0000 UTC m=+2.780823251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.865300 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530ef5bf243 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.289495107 +0000 UTC m=+3.010296284,LastTimestamp:2025-12-10 15:46:17.289495107 +0000 UTC m=+3.010296284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.869056 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530ef70aaf9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.290853113 +0000 UTC m=+3.011654290,LastTimestamp:2025-12-10 15:46:17.290853113 +0000 UTC m=+3.011654290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.872771 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530eff33f91 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.299410833 +0000 UTC m=+3.020212020,LastTimestamp:2025-12-10 15:46:17.299410833 +0000 UTC m=+3.020212020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.877241 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530f00503b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.300575159 +0000 UTC m=+3.021376336,LastTimestamp:2025-12-10 15:46:17.300575159 +0000 UTC m=+3.021376336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.882419 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187fe530f00874fd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.300800765 +0000 UTC m=+3.021601942,LastTimestamp:2025-12-10 15:46:17.300800765 +0000 UTC m=+3.021601942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.888317 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530fb3ec2da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.488909018 +0000 UTC m=+3.209710195,LastTimestamp:2025-12-10 15:46:17.488909018 +0000 UTC m=+3.209710195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.893209 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530fc35c588 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.505097096 +0000 UTC m=+3.225898273,LastTimestamp:2025-12-10 15:46:17.505097096 +0000 UTC m=+3.225898273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.898090 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530fc4adb5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.506478939 +0000 UTC m=+3.227280116,LastTimestamp:2025-12-10 15:46:17.506478939 +0000 UTC m=+3.227280116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.904478 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53102e258b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.617070261 +0000 UTC m=+3.337871478,LastTimestamp:2025-12-10 15:46:17.617070261 +0000 UTC m=+3.337871478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.905886 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe53107dcb6a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.700587175 +0000 UTC m=+3.421388352,LastTimestamp:2025-12-10 15:46:17.700587175 +0000 UTC m=+3.421388352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.908374 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5310868d3e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.709769701 +0000 UTC m=+3.430570878,LastTimestamp:2025-12-10 15:46:17.709769701 +0000 UTC m=+3.430570878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.909855 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5310ec70b07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.816607495 +0000 UTC m=+3.537408672,LastTimestamp:2025-12-10 15:46:17.816607495 +0000 UTC m=+3.537408672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.912598 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5310f634cee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.826847982 +0000 UTC m=+3.547649169,LastTimestamp:2025-12-10 15:46:17.826847982 +0000 UTC m=+3.547649169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.914321 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5313f42d9fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:18.630027772 +0000 UTC m=+4.350828959,LastTimestamp:2025-12-10 15:46:18.630027772 +0000 UTC m=+4.350828959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.917199 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5314d438018 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:18.86495132 +0000 UTC m=+4.585752497,LastTimestamp:2025-12-10 15:46:18.86495132 +0000 UTC m=+4.585752497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.920638 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5314dc3eaa1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:18.873367201 +0000 UTC m=+4.594168378,LastTimestamp:2025-12-10 15:46:18.873367201 +0000 UTC m=+4.594168378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.924026 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5314dd36169 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:18.874380649 +0000 UTC m=+4.595181826,LastTimestamp:2025-12-10 15:46:18.874380649 +0000 UTC m=+4.595181826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.927474 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5315b04f21b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.095732763 +0000 UTC m=+4.816533970,LastTimestamp:2025-12-10 15:46:19.095732763 +0000 UTC m=+4.816533970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.931004 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5315bf8d6dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.111716573 +0000 UTC m=+4.832517760,LastTimestamp:2025-12-10 15:46:19.111716573 +0000 UTC m=+4.832517760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 
15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.934765 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5315c0ba0b4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.112947892 +0000 UTC m=+4.833749079,LastTimestamp:2025-12-10 15:46:19.112947892 +0000 UTC m=+4.833749079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.939028 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5316880adb6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.321945526 +0000 UTC m=+5.042746713,LastTimestamp:2025-12-10 15:46:19.321945526 +0000 UTC m=+5.042746713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.944036 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe531694e392d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.335416109 +0000 UTC m=+5.056217296,LastTimestamp:2025-12-10 15:46:19.335416109 +0000 UTC m=+5.056217296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.948115 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe531695f55a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.336537508 +0000 UTC m=+5.057338705,LastTimestamp:2025-12-10 15:46:19.336537508 +0000 UTC m=+5.057338705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.953468 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe531750b5655 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.532359253 +0000 UTC m=+5.253160440,LastTimestamp:2025-12-10 15:46:19.532359253 +0000 UTC m=+5.253160440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.957428 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53175d9135e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.545842526 +0000 UTC m=+5.266643703,LastTimestamp:2025-12-10 15:46:19.545842526 +0000 UTC m=+5.266643703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.962402 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53175f253e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.547497442 +0000 UTC m=+5.268298639,LastTimestamp:2025-12-10 15:46:19.547497442 +0000 
UTC m=+5.268298639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.967562 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe5318213e86e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.75102475 +0000 UTC m=+5.471825957,LastTimestamp:2025-12-10 15:46:19.75102475 +0000 UTC m=+5.471825957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.972040 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187fe53182f2272d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:19.765589805 +0000 UTC m=+5.486390982,LastTimestamp:2025-12-10 15:46:19.765589805 +0000 UTC m=+5.486390982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.976614 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 10 15:46:33 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-controller-manager-crc.187fe5319fc2ef77 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 10 15:46:33 crc kubenswrapper[5114]: body: Dec 10 15:46:33 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:20.249034615 +0000 UTC m=+5.969835822,LastTimestamp:2025-12-10 15:46:20.249034615 +0000 UTC m=+5.969835822,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:33 crc kubenswrapper[5114]: > Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.988936 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe5319fc5253e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:20.249179454 +0000 UTC m=+5.969980661,LastTimestamp:2025-12-10 15:46:20.249179454 +0000 UTC m=+5.969980661,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:33 crc kubenswrapper[5114]: E1210 15:46:33.994877 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 10 15:46:33 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.187fe5338a8969de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 10 15:46:33 crc kubenswrapper[5114]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 15:46:33 crc kubenswrapper[5114]: Dec 10 15:46:33 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:28.482877918 +0000 UTC m=+14.203679135,LastTimestamp:2025-12-10 15:46:28.482877918 +0000 UTC m=+14.203679135,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:33 crc kubenswrapper[5114]: > Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.001664 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5338a8b0867 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:28.482984039 +0000 UTC m=+14.203785246,LastTimestamp:2025-12-10 15:46:28.482984039 +0000 UTC m=+14.203785246,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.007700 5114 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-apiserver-crc.187fe5338a8969de\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 10 15:46:34 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.187fe5338a8969de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 10 15:46:34 crc kubenswrapper[5114]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 15:46:34 crc kubenswrapper[5114]: Dec 10 15:46:34 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:28.482877918 +0000 UTC m=+14.203679135,LastTimestamp:2025-12-10 15:46:28.492823279 +0000 UTC m=+14.213624496,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:34 crc kubenswrapper[5114]: > Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.010061 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe5338a8b0867\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5338a8b0867 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:28.482984039 +0000 UTC m=+14.203785246,LastTimestamp:2025-12-10 15:46:28.492925221 +0000 UTC m=+14.213726437,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.013584 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187fe5319fc2ef77\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 10 15:46:34 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-controller-manager-crc.187fe5319fc2ef77 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 10 15:46:34 crc kubenswrapper[5114]: body: Dec 10 15:46:34 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:20.249034615 +0000 UTC 
m=+5.969835822,LastTimestamp:2025-12-10 15:46:30.249555282 +0000 UTC m=+15.970356489,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:34 crc kubenswrapper[5114]: > Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.017977 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187fe5319fc5253e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187fe5319fc5253e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:20.249179454 +0000 UTC m=+5.969980661,LastTimestamp:2025-12-10 15:46:30.249647554 +0000 UTC m=+15.970448751,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.023473 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 10 15:46:34 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.187fe534166c6bcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 10 15:46:34 crc kubenswrapper[5114]: body: Dec 10 15:46:34 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:30.829788111 +0000 UTC m=+16.550589288,LastTimestamp:2025-12-10 15:46:30.829788111 +0000 UTC m=+16.550589288,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:34 crc kubenswrapper[5114]: > Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.028820 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe534166d5e32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection 
refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:30.829850162 +0000 UTC m=+16.550651339,LastTimestamp:2025-12-10 15:46:30.829850162 +0000 UTC m=+16.550651339,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.034255 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe534166c6bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 10 15:46:34 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.187fe534166c6bcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 10 15:46:34 crc kubenswrapper[5114]: body: Dec 10 15:46:34 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:30.829788111 +0000 UTC m=+16.550589288,LastTimestamp:2025-12-10 15:46:31.675128802 +0000 UTC m=+17.395929979,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 10 15:46:34 crc kubenswrapper[5114]: > Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.038311 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe534166d5e32\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe534166d5e32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:30.829850162 +0000 UTC m=+16.550651339,LastTimestamp:2025-12-10 15:46:31.675202934 +0000 UTC m=+17.396004111,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.489500 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.609538 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.683089 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.684858 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d660c46f43ddf7099017beb0aa69f3e5a073829386002b1c17d2d4820d1176b0" exitCode=255 Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.684914 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d660c46f43ddf7099017beb0aa69f3e5a073829386002b1c17d2d4820d1176b0"} Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.685143 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.685835 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.685884 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.685899 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.686354 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:34 crc kubenswrapper[5114]: I1210 15:46:34.686729 5114 scope.go:117] "RemoveContainer" containerID="d660c46f43ddf7099017beb0aa69f3e5a073829386002b1c17d2d4820d1176b0" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.695331 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe530fc4adb5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530fc4adb5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.506478939 +0000 UTC m=+3.227280116,LastTimestamp:2025-12-10 15:46:34.688580762 +0000 UTC m=+20.409381949,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.882629 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe53107dcb6a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe53107dcb6a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.700587175 +0000 UTC m=+3.421388352,LastTimestamp:2025-12-10 15:46:34.876590529 +0000 UTC m=+20.597391706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:34 crc kubenswrapper[5114]: E1210 15:46:34.894173 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe5310868d3e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5310868d3e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.709769701 +0000 UTC m=+3.430570878,LastTimestamp:2025-12-10 15:46:34.888400637 +0000 UTC m=+20.609201824,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.488410 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.689178 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.690965 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6"} Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.691224 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.691787 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.691826 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:35 crc kubenswrapper[5114]: I1210 15:46:35.691840 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:35 crc kubenswrapper[5114]: E1210 15:46:35.692163 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:36 crc kubenswrapper[5114]: I1210 15:46:36.490524 5114 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.116688 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.253505 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.253751 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.254833 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.254878 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.254903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.255319 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.258862 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.358745 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.359977 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.360057 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.360074 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.360107 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.373844 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.489353 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.698031 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.698942 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.702442 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6" exitCode=255
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.702733 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.702703 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6"}
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.703088 5114 scope.go:117] "RemoveContainer" containerID="d660c46f43ddf7099017beb0aa69f3e5a073829386002b1c17d2d4820d1176b0"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.703393 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.703712 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.703745 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.703757 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.704092 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.705104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.705159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.705178 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.705794 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 10 15:46:37 crc kubenswrapper[5114]: I1210 15:46:37.706237 5114 scope.go:117] "RemoveContainer" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.706633 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.718189 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 10 15:46:37 crc kubenswrapper[5114]: E1210 15:46:37.969232 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 10 15:46:38 crc kubenswrapper[5114]: I1210 15:46:38.487962 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:38 crc kubenswrapper[5114]: I1210 15:46:38.707689 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 10 15:46:38 crc kubenswrapper[5114]: E1210 15:46:38.746125 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 10 15:46:39 crc kubenswrapper[5114]: I1210 15:46:39.491897 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:39 crc kubenswrapper[5114]: E1210 15:46:39.818562 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 10 15:46:40 crc kubenswrapper[5114]: I1210 15:46:40.491003 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:41 crc kubenswrapper[5114]: I1210 15:46:41.491001 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the
cluster scope
Dec 10 15:46:42 crc kubenswrapper[5114]: I1210 15:46:42.489464 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:43 crc kubenswrapper[5114]: I1210 15:46:43.489377 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:44 crc kubenswrapper[5114]: E1210 15:46:44.123365 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 10 15:46:44 crc kubenswrapper[5114]: E1210 15:46:44.213326 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.374375 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.375752 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.375860 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.375932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.376004 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 10 15:46:44 crc kubenswrapper[5114]: E1210 15:46:44.395333 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 10 15:46:44 crc kubenswrapper[5114]: I1210 15:46:44.491412 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:44 crc kubenswrapper[5114]: E1210 15:46:44.609767 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.489029 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.692164 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.692453 5114
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.693376 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.693441 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.693461 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:45 crc kubenswrapper[5114]: E1210 15:46:45.694065 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:45 crc kubenswrapper[5114]: I1210 15:46:45.694508 5114 scope.go:117] "RemoveContainer" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6" Dec 10 15:46:45 crc kubenswrapper[5114]: E1210 15:46:45.694836 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 10 15:46:45 crc kubenswrapper[5114]: E1210 15:46:45.702097 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe535b04fb3a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:46:45.694786023 +0000 UTC m=+31.415587240,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:46 crc kubenswrapper[5114]: E1210 15:46:46.258431 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.326944 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.327220 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.328159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.328214 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.328233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:46 crc kubenswrapper[5114]: E1210 15:46:46.328854 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.330439 5114 scope.go:117] "RemoveContainer" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6" Dec 10 15:46:46 crc kubenswrapper[5114]: E1210 15:46:46.330776 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 10 15:46:46 crc kubenswrapper[5114]: E1210 15:46:46.337074 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe535b04fb3a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:46:46.330728597 +0000 UTC m=+32.051529804,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:46 crc kubenswrapper[5114]: I1210 15:46:46.490356 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:46 crc kubenswrapper[5114]: E1210 15:46:46.719448 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 10 15:46:47 crc kubenswrapper[5114]: I1210 15:46:47.484991 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:48 crc kubenswrapper[5114]: I1210 15:46:48.489033 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:49 crc kubenswrapper[5114]: I1210 15:46:49.488678 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:50 crc kubenswrapper[5114]: I1210 15:46:50.486638 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:51 crc kubenswrapper[5114]: E1210 15:46:51.128904 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.396560 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.397771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.397843 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.397875 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.397917 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:51 crc kubenswrapper[5114]: E1210 15:46:51.411755 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:46:51 crc kubenswrapper[5114]: I1210 15:46:51.489231 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:52 crc kubenswrapper[5114]: I1210 15:46:52.490103 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:53 crc kubenswrapper[5114]: E1210 15:46:53.216140 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 10 15:46:53 crc kubenswrapper[5114]: I1210 15:46:53.489417 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:54 crc kubenswrapper[5114]: I1210 15:46:54.489580 5114 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:54 crc kubenswrapper[5114]: E1210 15:46:54.610995 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:46:55 crc kubenswrapper[5114]: I1210 15:46:55.494382 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:56 crc kubenswrapper[5114]: I1210 15:46:56.492635 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:57 crc kubenswrapper[5114]: I1210 15:46:57.493349 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:57 crc kubenswrapper[5114]: E1210 15:46:57.767116 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.134839 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.412245 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.413440 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.413704 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.413955 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.414252 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.430111 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.494459 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.568566 
5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.569387 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.569432 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.569448 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.569810 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:46:58 crc kubenswrapper[5114]: I1210 15:46:58.570108 5114 scope.go:117] "RemoveContainer" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.578974 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe530fc4adb5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe530fc4adb5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.506478939 +0000 UTC m=+3.227280116,LastTimestamp:2025-12-10 15:46:58.571491024 +0000 UTC m=+44.292292211,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.786007 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe53107dcb6a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe53107dcb6a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.700587175 +0000 UTC m=+3.421388352,LastTimestamp:2025-12-10 15:46:58.7804869 +0000 UTC m=+44.501288087,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:58 crc kubenswrapper[5114]: E1210 15:46:58.796444 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe5310868d3e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe5310868d3e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:17.709769701 +0000 UTC m=+3.430570878,LastTimestamp:2025-12-10 15:46:58.792164435 +0000 UTC m=+44.512965612,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.489883 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.762640 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.765147 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4"} Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.765487 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.766474 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.766526 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:46:59 crc kubenswrapper[5114]: I1210 15:46:59.766538 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:46:59 crc kubenswrapper[5114]: E1210 15:46:59.766884 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.489742 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.769343 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.769794 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.771574 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" exitCode=255 Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.771624 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4"} Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.771668 5114 scope.go:117] "RemoveContainer" containerID="454752bddaaaf6c7dd597271d295a130b401680befb3fe3d658c33b89085c3f6" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.771831 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.772502 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.772534 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.772543 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:00 crc kubenswrapper[5114]: E1210 15:47:00.772829 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:47:00 crc kubenswrapper[5114]: I1210 15:47:00.773102 5114 scope.go:117] "RemoveContainer" containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" Dec 10 15:47:00 crc kubenswrapper[5114]: E1210 15:47:00.773325 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 10 15:47:00 crc kubenswrapper[5114]: E1210 15:47:00.779385 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe535b04fb3a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:47:00.773297982 +0000 UTC m=+46.494099159,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:47:01 crc kubenswrapper[5114]: E1210 15:47:01.093687 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource 
\"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 10 15:47:01 crc kubenswrapper[5114]: I1210 15:47:01.490288 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:01 crc kubenswrapper[5114]: I1210 15:47:01.775459 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 10 15:47:02 crc kubenswrapper[5114]: I1210 15:47:02.492544 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:03 crc kubenswrapper[5114]: I1210 15:47:03.491599 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:04 crc kubenswrapper[5114]: I1210 15:47:04.490928 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:04 crc kubenswrapper[5114]: E1210 15:47:04.611727 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:47:05 crc kubenswrapper[5114]: E1210 15:47:05.144379 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.431008 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.433249 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.433394 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.433417 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.433501 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:47:05 crc kubenswrapper[5114]: E1210 15:47:05.447643 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:47:05 crc kubenswrapper[5114]: I1210 15:47:05.491788 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Dec 10 15:47:06 crc kubenswrapper[5114]: E1210 15:47:06.310531 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.326881 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.327169 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.328088 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.328144 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.328163 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:06 crc kubenswrapper[5114]: E1210 15:47:06.328806 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.329368 5114 scope.go:117] "RemoveContainer" containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" Dec 10 15:47:06 crc kubenswrapper[5114]: E1210 15:47:06.329753 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 10 15:47:06 crc kubenswrapper[5114]: E1210 15:47:06.336034 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe535b04fb3a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:47:06.329712992 +0000 UTC m=+52.050514209,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:47:06 crc kubenswrapper[5114]: I1210 15:47:06.486816 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:07 crc kubenswrapper[5114]: I1210 15:47:07.490613 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:08 crc kubenswrapper[5114]: I1210 15:47:08.489165 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.258081 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.258369 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.259224 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.259324 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.259351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:09 crc kubenswrapper[5114]: E1210 15:47:09.259797 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.489658 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.766100 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.766562 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.767588 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.767638 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.767649 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:09 crc kubenswrapper[5114]: E1210 15:47:09.767996 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 10 15:47:09 crc kubenswrapper[5114]: I1210 15:47:09.768269 5114 scope.go:117] "RemoveContainer" containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" Dec 10 15:47:09 crc kubenswrapper[5114]: E1210 15:47:09.768495 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 10 15:47:09 crc kubenswrapper[5114]: E1210 15:47:09.773940 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187fe535b04fb3a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187fe535b04fb3a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:46:37.706564517 +0000 UTC m=+23.427365724,LastTimestamp:2025-12-10 15:47:09.768463789 +0000 UTC m=+55.489264966,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:47:10 crc kubenswrapper[5114]: I1210 15:47:10.491759 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:11 crc kubenswrapper[5114]: I1210 15:47:11.490112 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:12 crc kubenswrapper[5114]: E1210 15:47:12.152785 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.448820 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.450003 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.450055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.450128 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.450159 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:47:12 crc kubenswrapper[5114]: E1210 15:47:12.463335 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:47:12 crc kubenswrapper[5114]: I1210 15:47:12.489051 5114 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:13 crc kubenswrapper[5114]: I1210 15:47:13.490360 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:14 crc kubenswrapper[5114]: I1210 15:47:14.487849 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:14 crc kubenswrapper[5114]: E1210 15:47:14.612331 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:47:15 crc kubenswrapper[5114]: I1210 15:47:15.491095 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:16 crc kubenswrapper[5114]: I1210 15:47:16.489970 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:17 crc kubenswrapper[5114]: I1210 15:47:17.493450 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:18 crc kubenswrapper[5114]: I1210 15:47:18.489680 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:19 crc kubenswrapper[5114]: E1210 15:47:19.158815 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.463655 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.464507 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.464586 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.464602 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.464628 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:47:19 crc kubenswrapper[5114]: E1210 15:47:19.473602 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting 
existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.488000 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.553625 5114 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-kc5nj" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.560056 5114 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-kc5nj" Dec 10 15:47:19 crc kubenswrapper[5114]: I1210 15:47:19.611977 5114 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 10 15:47:20 crc kubenswrapper[5114]: I1210 15:47:20.412122 5114 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 10 15:47:20 crc kubenswrapper[5114]: I1210 15:47:20.561748 5114 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-09 15:42:19 +0000 UTC" deadline="2026-01-04 04:16:56.426364783 +0000 UTC" Dec 10 15:47:20 crc kubenswrapper[5114]: I1210 15:47:20.561841 5114 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="588h29m35.864533247s" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.059631 5114 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.099764 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.109644 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.211787 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.309212 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.409489 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.511772 5114 apiserver.go:52] "Watching apiserver" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.523159 5114 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.523717 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-pvhhc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-node-bgfnl","openshift-dns/node-resolver-49rgv","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-lg6m5","openshift-multus/network-metrics-daemon-gjs2g","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/iptables-alerter-5jnd7","openshift-image-registry/node-ca-sg27x","openshift-multus/multus-additional-cni-plugins-wbl48","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj"] Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.525239 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.525687 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.525766 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.525858 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.525966 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.526543 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.527092 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.527196 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.527286 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.527654 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.527720 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.528685 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.529030 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.529257 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.529617 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.529828 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.529981 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.530410 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.543940 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.548218 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.548346 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.556614 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.557016 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.560091 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.560105 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.560132 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.560143 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.560135 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.561883 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.563967 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564226 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564674 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564709 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564740 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564766 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.564997 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.566944 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.567017 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.570627 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.570854 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.570946 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.570974 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571007 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571329 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571056 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571107 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571138 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.571107 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.572917 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.573000 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.573821 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.573906 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.577042 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.578043 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.579577 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.579578 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.580262 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.582732 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.583413 5114 scope.go:117] "RemoveContainer" containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.584437 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.585045 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.589177 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.599552 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.602920 5114 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.609115 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.617479 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.625973 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626151 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626256 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626419 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626516 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626610 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626698 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.626789 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626884 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.626975 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627062 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627154 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627244 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627362 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627451 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627545 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627636 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 10 15:47:21 crc 
kubenswrapper[5114]: I1210 15:47:21.627723 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627827 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.627921 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.628020 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.628114 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.628214 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.628379 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.629693 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.629800 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.629974 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: 
\"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.630081 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.630176 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.630255 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.630358 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631406 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631525 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631631 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631739 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631839 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631944 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632050 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632143 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632240 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632382 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632492 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632612 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632719 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632823 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632931 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633042 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633156 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633258 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634108 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634228 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634358 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634451 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635979 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636007 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636059 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636085 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636107 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631668 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636126 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631169 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636785 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631306 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631721 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631826 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.631843 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632176 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632188 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632247 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632480 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632565 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632578 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632658 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632735 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632882 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632924 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.632947 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633285 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636878 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636808 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636938 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636960 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636979 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636995 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.637015 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637032 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637050 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637069 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637092 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637113 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637138 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637156 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637171 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637190 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.637210 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637227 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637246 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637302 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637321 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637342 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637365 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637382 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637411 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637429 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.637445 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637462 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637480 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637500 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637518 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637534 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637551 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637570 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637588 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637607 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637626 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637645 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637696 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637715 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637732 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637750 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637796 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637830 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637849 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637868 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637887 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637904 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637921 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637939 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637959 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637976 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637999 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638019 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638247 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638319 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: 
\"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638342 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638364 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638385 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638405 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638425 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638447 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633644 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638467 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633735 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633832 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633882 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633904 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634583 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634649 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.634891 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635009 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635030 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638710 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638833 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639024 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639029 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635143 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635317 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635443 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635489 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639166 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639205 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639217 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639443 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639488 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639677 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639848 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640022 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640166 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640207 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640328 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640242 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635443 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635726 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640355 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636066 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640602 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640603 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640733 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.640967 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641107 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641114 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641139 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641440 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641656 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641672 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.641953 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636013 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636545 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636700 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636720 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636838 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.633357 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637037 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637165 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637176 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637499 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637579 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637583 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637592 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637593 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637610 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637715 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638488 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643901 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643922 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643940 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643955 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643971 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644034 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644051 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644068 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644083 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 
crc kubenswrapper[5114]: I1210 15:47:21.644099 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644115 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644132 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644148 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644163 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644183 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644199 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644214 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644230 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644245 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 
15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644262 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644309 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644328 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644342 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644357 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644374 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644390 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644411 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644445 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644464 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 
15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644479 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644495 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645347 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645371 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645389 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645405 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645419 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645438 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645455 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645470 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 10 15:47:21 
crc kubenswrapper[5114]: I1210 15:47:21.645487 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645502 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645519 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645537 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645554 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645572 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645592 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645612 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645628 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645646 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: 
\"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645665 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645682 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645699 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645716 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645734 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645750 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645768 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645785 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645803 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645821 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645865 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645884 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645904 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645921 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645938 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645958 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646165 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646203 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646235 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646263 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646308 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646337 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646364 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646391 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646423 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646451 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646480 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646508 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646535 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646566 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646593 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646632 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646660 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646690 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646719 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646746 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646773 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646802 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646828 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646854 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646926 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646956 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646982 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647011 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647104 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637904 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.637987 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638120 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638221 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638442 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638448 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.638456 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.639037 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635038 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635389 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635945 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642136 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642160 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642221 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642360 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642418 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642501 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642677 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642719 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.642910 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643084 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643090 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643174 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.635433 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643411 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.636092 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655607 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643401 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643463 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643503 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643519 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643548 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643721 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643760 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.643750 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644125 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644499 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655749 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644790 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644981 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645189 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645712 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645748 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645786 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.645822 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646079 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655817 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646105 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646110 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646600 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646668 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646731 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646751 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646742 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646601 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.646913 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647079 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.647193 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.147172847 +0000 UTC m=+67.867974024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647441 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647948 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.647962 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648006 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648047 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648352 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648409 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648687 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648704 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648724 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648910 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.648996 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649137 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649157 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). 
InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649470 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649612 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649617 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649674 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.649900 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650025 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650122 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650386 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650417 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650477 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650483 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.650805 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651037 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651224 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651236 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651457 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651683 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.651956 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652190 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652233 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652251 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652837 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652916 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.652940 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653048 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653159 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653188 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653547 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653577 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653893 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.653904 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654008 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654139 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654565 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654591 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654654 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.654870 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655049 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655313 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.655338 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.644747 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656256 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656612 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656846 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656886 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656905 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656917 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656953 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.656983 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657034 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657066 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657132 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657367 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657554 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.658002 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.658152 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.658512 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.658682 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659439 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.657140 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-k8s-cni-cncf-io\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659467 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659492 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-netns\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659518 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659535 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxbp\" (UniqueName: \"kubernetes.io/projected/e7c683ba-536f-45e5-89b0-fe14989cad13-kube-api-access-sfxbp\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659552 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659567 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659577 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659592 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659584 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659696 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgklm\" (UniqueName: \"kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659721 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659724 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl62h\" (UniqueName: \"kubernetes.io/projected/a54715ec-382b-4bb8-bef2-f125ee0bae2b-kube-api-access-xl62h\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659762 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-os-release\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659788 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659819 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659845 
5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a54715ec-382b-4bb8-bef2-f125ee0bae2b-serviceca\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659870 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659895 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659916 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659936 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b38ac556-07b2-4e25-9595-6adae4fcecb7-rootfs\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659949 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659965 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-cni-binary-copy\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.659987 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-multus\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660011 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660034 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9ft\" (UniqueName: \"kubernetes.io/projected/b38ac556-07b2-4e25-9595-6adae4fcecb7-kube-api-access-8g9ft\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660055 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-conf-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660107 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkm4v\" (UniqueName: \"kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660139 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660160 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660183 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660208 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660220 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660234 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.660309 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.660391 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.16037431 +0000 UTC m=+67.881175497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660069 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660768 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660877 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.660968 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661001 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b38ac556-07b2-4e25-9595-6adae4fcecb7-mcd-auth-proxy-config\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661032 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661050 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661067 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661085 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wz8\" (UniqueName: \"kubernetes.io/projected/379e5b28-21b4-4727-a60f-0fad71bf89fa-kube-api-access-j2wz8\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661102 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-os-release\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661117 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-daemon-config\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661117 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661136 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661154 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlfr\" (UniqueName: \"kubernetes.io/projected/48d8f4a9-0b40-486c-ac70-597d1fab05c1-kube-api-access-wtlfr\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661172 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-system-cni-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661188 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661208 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661227 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661266 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xxc\" (UniqueName: \"kubernetes.io/projected/3a3e165c-439d-4282-b1e7-179dca439343-kube-api-access-j9xxc\") pod \"multus-additional-cni-plugins-wbl48\" (UID: 
\"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661303 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-cnibin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661320 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-multus-certs\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661338 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661354 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661370 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661386 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661401 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.661406 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661417 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-hostroot\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 
15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661408 5114 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.661455 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.161445688 +0000 UTC m=+67.882246865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661712 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.661439 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662109 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-socket-dir-parent\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662127 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-bin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662144 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-etc-kubernetes\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662159 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-cnibin\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 
15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662174 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662192 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662209 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662225 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662240 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/379e5b28-21b4-4727-a60f-0fad71bf89fa-tmp-dir\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662256 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-kubelet\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662285 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662303 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662318 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a54715ec-382b-4bb8-bef2-f125ee0bae2b-host\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " 
pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662334 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-system-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662350 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662367 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662386 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662421 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662441 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662457 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/379e5b28-21b4-4727-a60f-0fad71bf89fa-hosts-file\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662547 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662478 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662848 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662864 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662880 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b38ac556-07b2-4e25-9595-6adae4fcecb7-proxy-tls\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662970 5114 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662980 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662989 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.662997 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663005 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663014 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663023 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: 
\"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663032 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663041 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663050 5114 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663059 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663068 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663076 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663231 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663243 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663251 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663260 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663330 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.663342 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.664000 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664014 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664023 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664032 5114 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664040 5114 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664049 5114 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664057 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664065 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664073 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664083 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664094 5114 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664103 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664114 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664125 5114 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664134 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664144 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664152 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664265 5114 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664289 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664302 5114 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664329 5114 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664338 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664346 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664355 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664363 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664371 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664380 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664387 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664396 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664405 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664439 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.664450 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665053 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665390 5114 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665408 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665418 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665430 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665440 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665449 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 
15:47:21.665459 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665469 5114 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665478 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665487 5114 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665496 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665504 5114 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665513 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665522 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665531 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665539 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665551 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665560 5114 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665570 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc 
kubenswrapper[5114]: I1210 15:47:21.665578 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665587 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665595 5114 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665605 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665613 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665622 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665630 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665639 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665648 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665656 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665664 5114 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665911 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.665673 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on 
node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666103 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666116 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666126 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666134 5114 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666582 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666592 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666601 5114 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666609 5114 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666618 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666628 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666637 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666646 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666655 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" 
DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666664 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666672 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666681 5114 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666691 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666699 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666707 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666715 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666761 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666770 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666780 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666789 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666798 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666806 5114 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: 
I1210 15:47:21.666851 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666863 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666873 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666882 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666911 5114 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666919 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666927 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666937 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666945 5114 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666975 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666985 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.666993 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667003 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667030 5114 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667039 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667047 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667058 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667073 5114 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667108 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667116 5114 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667124 5114 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667132 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667140 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667148 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667157 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667164 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667172 5114 
reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667181 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667189 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667197 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667205 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667216 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667224 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667234 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667242 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667251 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667259 5114 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667277 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667180 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667288 5114 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667509 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667519 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667530 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667652 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667671 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667681 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667689 5114 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667699 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667729 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667739 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667752 5114 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667761 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667770 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667778 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath 
\"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667786 5114 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667795 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667803 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667811 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667819 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667828 5114 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667836 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667844 5114 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667853 5114 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667862 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667869 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667879 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667890 5114 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: 
I1210 15:47:21.667898 5114 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667906 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667914 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667923 5114 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667934 5114 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667942 5114 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667950 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667957 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667967 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667976 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667984 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667992 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.667999 5114 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668008 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668016 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668024 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668032 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668040 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668049 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668057 5114 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668066 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668076 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668084 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668092 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668100 5114 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668109 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668120 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668130 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668140 5114 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668151 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668163 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668175 5114 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668184 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668193 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668203 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668212 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668222 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668231 5114 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668240 5114 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668249 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668261 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.668297 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.670755 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.676994 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.678779 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.678859 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.678877 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.678940 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.678960 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.178940918 +0000 UTC m=+67.899742095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.679012 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.679147 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.679162 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.679174 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.679232 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.179216265 +0000 UTC m=+67.900017442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.682224 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.685851 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.685973 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.688092 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.689802 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.691120 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.703948 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.705225 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: 
"31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.705751 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.709102 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.712453 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.720290 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.729455 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.739858 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.749314 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.758346 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.765613 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769139 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769219 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769304 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769330 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769419 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b38ac556-07b2-4e25-9595-6adae4fcecb7-proxy-tls\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769447 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-k8s-cni-cncf-io\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769468 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-netns\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769496 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769517 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxbp\" (UniqueName: \"kubernetes.io/projected/e7c683ba-536f-45e5-89b0-fe14989cad13-kube-api-access-sfxbp\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769538 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769558 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769576 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769599 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgklm\" (UniqueName: \"kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769616 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xl62h\" (UniqueName: \"kubernetes.io/projected/a54715ec-382b-4bb8-bef2-f125ee0bae2b-kube-api-access-xl62h\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769631 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-os-release\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769651 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769669 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a54715ec-382b-4bb8-bef2-f125ee0bae2b-serviceca\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769702 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769716 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b38ac556-07b2-4e25-9595-6adae4fcecb7-rootfs\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769730 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-cni-binary-copy\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769744 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-multus\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769763 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8g9ft\" (UniqueName: \"kubernetes.io/projected/b38ac556-07b2-4e25-9595-6adae4fcecb7-kube-api-access-8g9ft\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769777 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-conf-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769797 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkm4v\" (UniqueName: \"kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769822 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769842 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769862 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769894 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b38ac556-07b2-4e25-9595-6adae4fcecb7-mcd-auth-proxy-config\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769917 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769937 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc 
kubenswrapper[5114]: I1210 15:47:21.769958 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.769981 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wz8\" (UniqueName: \"kubernetes.io/projected/379e5b28-21b4-4727-a60f-0fad71bf89fa-kube-api-access-j2wz8\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770000 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-os-release\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770012 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770020 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-daemon-config\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770042 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770066 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlfr\" (UniqueName: \"kubernetes.io/projected/48d8f4a9-0b40-486c-ac70-597d1fab05c1-kube-api-access-wtlfr\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770090 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-system-cni-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770110 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770174 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770201 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-k8s-cni-cncf-io\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770223 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j9xxc\" (UniqueName: \"kubernetes.io/projected/3a3e165c-439d-4282-b1e7-179dca439343-kube-api-access-j9xxc\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770245 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-cnibin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770269 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-multus-certs\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770300 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770312 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770337 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770365 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770389 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770412 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770435 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-hostroot\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770459 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770485 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770487 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-socket-dir-parent\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770526 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-bin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770538 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-socket-dir-parent\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770550 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-etc-kubernetes\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770574 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770575 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-cnibin\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770604 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-cnibin\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770608 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770646 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770668 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770690 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/379e5b28-21b4-4727-a60f-0fad71bf89fa-tmp-dir\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770713 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-kubelet\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770738 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket\") pod 
\"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770757 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a54715ec-382b-4bb8-bef2-f125ee0bae2b-host\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770777 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-system-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770799 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770838 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/379e5b28-21b4-4727-a60f-0fad71bf89fa-hosts-file\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770884 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770901 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770915 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770930 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770947 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770959 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771004 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/379e5b28-21b4-4727-a60f-0fad71bf89fa-hosts-file\") pod \"node-resolver-49rgv\" (UID: 
\"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771039 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771087 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a54715ec-382b-4bb8-bef2-f125ee0bae2b-serviceca\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771261 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771333 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771420 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.771467 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.771562 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:22.271543166 +0000 UTC m=+67.992344343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771716 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3a3e165c-439d-4282-b1e7-179dca439343-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771758 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771901 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-bin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.771949 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-etc-kubernetes\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772007 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b38ac556-07b2-4e25-9595-6adae4fcecb7-mcd-auth-proxy-config\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770244 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-netns\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772074 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/379e5b28-21b4-4727-a60f-0fad71bf89fa-tmp-dir\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.770341 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-os-release\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772390 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b38ac556-07b2-4e25-9595-6adae4fcecb7-proxy-tls\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772584 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772839 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.772883 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.773595 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774243 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-os-release\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774649 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a3e165c-439d-4282-b1e7-179dca439343-system-cni-dir\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774712 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774723 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774775 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.774905 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-cnibin\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.775262 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-kubelet\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.775344 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.776876 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-daemon-config\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.778895 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.780063 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.783645 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a54715ec-382b-4bb8-bef2-f125ee0bae2b-host\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.783701 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-system-cni-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784198 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784242 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-run-multus-certs\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784748 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784793 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784820 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784844 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784872 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-hostroot\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.784894 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.785176 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-host-var-lib-cni-multus\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " 
pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.785236 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b38ac556-07b2-4e25-9595-6adae4fcecb7-rootfs\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.785441 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.785636 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7c683ba-536f-45e5-89b0-fe14989cad13-multus-conf-dir\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.786163 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7c683ba-536f-45e5-89b0-fe14989cad13-cni-binary-copy\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.787334 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxbp\" (UniqueName: \"kubernetes.io/projected/e7c683ba-536f-45e5-89b0-fe14989cad13-kube-api-access-sfxbp\") pod \"multus-lg6m5\" (UID: \"e7c683ba-536f-45e5-89b0-fe14989cad13\") " pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.788519 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl62h\" (UniqueName: \"kubernetes.io/projected/a54715ec-382b-4bb8-bef2-f125ee0bae2b-kube-api-access-xl62h\") pod \"node-ca-sg27x\" (UID: \"a54715ec-382b-4bb8-bef2-f125ee0bae2b\") " pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.789105 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.789747 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.790490 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlfr\" (UniqueName: \"kubernetes.io/projected/48d8f4a9-0b40-486c-ac70-597d1fab05c1-kube-api-access-wtlfr\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.792198 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkm4v\" (UniqueName: 
\"kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v\") pod \"ovnkube-control-plane-57b78d8988-79jfj\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.794024 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgklm\" (UniqueName: \"kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm\") pod \"ovnkube-node-bgfnl\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.795198 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xxc\" (UniqueName: \"kubernetes.io/projected/3a3e165c-439d-4282-b1e7-179dca439343-kube-api-access-j9xxc\") pod \"multus-additional-cni-plugins-wbl48\" (UID: \"3a3e165c-439d-4282-b1e7-179dca439343\") " pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.795936 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wz8\" (UniqueName: \"kubernetes.io/projected/379e5b28-21b4-4727-a60f-0fad71bf89fa-kube-api-access-j2wz8\") pod \"node-resolver-49rgv\" (UID: \"379e5b28-21b4-4727-a60f-0fad71bf89fa\") " pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.797183 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.801610 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g9ft\" (UniqueName: \"kubernetes.io/projected/b38ac556-07b2-4e25-9595-6adae4fcecb7-kube-api-access-8g9ft\") pod \"machine-config-daemon-pvhhc\" (UID: \"b38ac556-07b2-4e25-9595-6adae4fcecb7\") " pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.804862 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.812880 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.820934 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.830982 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.831350 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.832830 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad"} Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.833342 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.841785 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.843406 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.849688 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.853842 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:21 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Dec 10 15:47:21 crc kubenswrapper[5114]: else Dec 10 15:47:21 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 10 15:47:21 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_I
MAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.855027 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.855808 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.861218 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.863577 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:21 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 10 15:47:21 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 10 15:47:21 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Dec 10 15:47:21 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 10 15:47:21 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 10 15:47:21 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 10 15:47:21 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:21 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Dec 10 15:47:21 crc kubenswrapper[5114]: --webhook-port=9743 \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${ho_enable} \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:21 crc kubenswrapper[5114]: --disable-approver \ Dec 10 15:47:21 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Dec 10 15:47:21 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.866627 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:21 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 10 15:47:21 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:21 crc kubenswrapper[5114]: --disable-webhook \ Dec 10 15:47:21 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.868213 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.870504 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"
,\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.874864 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-acad417964c05a99524067ddb95a60b966f7790623a92c0159a2d11bd74f8331 WatchSource:0}: Error finding container acad417964c05a99524067ddb95a60b966f7790623a92c0159a2d11bd74f8331: Status 404 returned error can't find the container with id acad417964c05a99524067ddb95a60b966f7790623a92c0159a2d11bd74f8331 Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.875543 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.877303 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.878480 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.879836 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.884458 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.887788 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb38ac556_07b2_4e25_9595_6adae4fcecb7.slice/crio-8405b6b0f18ce90d94d73840a5d2ed017e153c4d9e67ff541cc3c6023b54e5f5 WatchSource:0}: Error finding container 8405b6b0f18ce90d94d73840a5d2ed017e153c4d9e67ff541cc3c6023b54e5f5: Status 404 returned error can't find the container with id 8405b6b0f18ce90d94d73840a5d2ed017e153c4d9e67ff541cc3c6023b54e5f5 Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.893318 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.894387 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-a199cf1da8fb7790abbb3c746b8ca2bfcd2d855529f7600f1eee455a9ec8496b WatchSource:0}: Error finding container a199cf1da8fb7790abbb3c746b8ca2bfcd2d855529f7600f1eee455a9ec8496b: Status 404 returned error can't find the container with id a199cf1da8fb7790abbb3c746b8ca2bfcd2d855529f7600f1eee455a9ec8496b Dec 10 15:47:21 crc 
kubenswrapper[5114]: E1210 15:47:21.897036 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 10 15:47:21 crc kubenswrapper[5114]: apiVersion: v1 Dec 10 15:47:21 crc kubenswrapper[5114]: clusters: Dec 10 15:47:21 crc kubenswrapper[5114]: - cluster: Dec 10 15:47:21 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 10 15:47:21 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Dec 10 15:47:21 crc kubenswrapper[5114]: name: default-cluster Dec 10 15:47:21 crc kubenswrapper[5114]: contexts: Dec 10 15:47:21 crc kubenswrapper[5114]: - context: Dec 10 15:47:21 crc kubenswrapper[5114]: cluster: default-cluster Dec 10 15:47:21 crc kubenswrapper[5114]: namespace: default Dec 10 15:47:21 crc kubenswrapper[5114]: user: default-auth Dec 10 15:47:21 crc kubenswrapper[5114]: name: default-context Dec 10 15:47:21 crc kubenswrapper[5114]: current-context: default-context Dec 10 15:47:21 crc kubenswrapper[5114]: kind: Config Dec 10 15:47:21 crc kubenswrapper[5114]: preferences: {} Dec 10 15:47:21 crc kubenswrapper[5114]: users: Dec 10 15:47:21 crc kubenswrapper[5114]: - name: default-auth Dec 10 15:47:21 crc kubenswrapper[5114]: user: Dec 10 15:47:21 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:21 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:21 crc kubenswrapper[5114]: EOF Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgklm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfnl_openshift-ovn-kubernetes(5bef68a8-63de-4992-87b6-3dc6c70f5a1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.897201 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-49rgv" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.897288 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.898847 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.898980 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519
052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.899049 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.908347 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.909645 5114 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379e5b28_21b4_4727_a60f_0fad71bf89fa.slice/crio-68ed71554505d070e4fc7e2ad6e6d9973c4c1c069894e9be8f9bbf05df9de042 WatchSource:0}: Error finding container 68ed71554505d070e4fc7e2ad6e6d9973c4c1c069894e9be8f9bbf05df9de042: Status 404 returned error can't find the container with id 68ed71554505d070e4fc7e2ad6e6d9973c4c1c069894e9be8f9bbf05df9de042 Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.912103 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:21 crc kubenswrapper[5114]: set -uo pipefail Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 10 15:47:21 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Dec 10 15:47:21 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Dec 10 15:47:21 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 10 15:47:21 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Dec 10 15:47:21 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: while true; do Dec 10 15:47:21 crc kubenswrapper[5114]: declare -A svc_ips Dec 10 15:47:21 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Dec 10 15:47:21 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Dec 10 15:47:21 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 10 15:47:21 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 10 15:47:21 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 10 15:47:21 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:21 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:21 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:21 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 10 15:47:21 crc kubenswrapper[5114]: for i in ${!cmds[*]} Dec 10 15:47:21 crc kubenswrapper[5114]: do Dec 10 15:47:21 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Dec 10 15:47:21 crc kubenswrapper[5114]: break Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Dec 10 15:47:21 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 10 15:47:21 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 10 15:47:21 crc kubenswrapper[5114]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 10 15:47:21 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 10 15:47:21 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:21 crc kubenswrapper[5114]: continue Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # Append resolver entries for services Dec 10 15:47:21 crc kubenswrapper[5114]: rc=0 Dec 10 15:47:21 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Dec 10 15:47:21 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Dec 10 15:47:21 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:21 crc kubenswrapper[5114]: continue Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 10 15:47:21 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Dec 10 15:47:21 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 10 15:47:21 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:21 crc kubenswrapper[5114]: unset svc_ips Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2wz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-49rgv_openshift-dns(379e5b28-21b4-4727-a60f-0fad71bf89fa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.913372 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-49rgv" podUID="379e5b28-21b4-4727-a60f-0fad71bf89fa" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.919387 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.921404 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lg6m5" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.929677 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.934646 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sg27x" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.935061 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c683ba_536f_45e5_89b0_fe14989cad13.slice/crio-857e236d8e95408dd720ee6b728ed964327eb80f0265c864036079ea941ad944 WatchSource:0}: Error finding container 857e236d8e95408dd720ee6b728ed964327eb80f0265c864036079ea941ad944: Status 404 returned error can't find the container with id 857e236d8e95408dd720ee6b728ed964327eb80f0265c864036079ea941ad944 Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.939187 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 10 15:47:21 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 10 15:47:21 crc kubenswrapper[5114]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfxbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lg6m5_openshift-multus(e7c683ba-536f-45e5-89b0-fe14989cad13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.942231 5114 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lg6m5" podUID="e7c683ba-536f-45e5-89b0-fe14989cad13" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.944974 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54715ec_382b_4bb8_bef2_f125ee0bae2b.slice/crio-d55b5d1a0f756702f3ac1b0514f4e6524c696c9aafaf7eef0b68cd237ff88ea0 WatchSource:0}: Error finding container d55b5d1a0f756702f3ac1b0514f4e6524c696c9aafaf7eef0b68cd237ff88ea0: Status 404 returned error can't find the container with id d55b5d1a0f756702f3ac1b0514f4e6524c696c9aafaf7eef0b68cd237ff88ea0 Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.946545 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 10 15:47:21 crc kubenswrapper[5114]: while [ true ]; Dec 10 15:47:21 crc kubenswrapper[5114]: do Dec 10 15:47:21 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Dec 10 15:47:21 crc kubenswrapper[5114]: echo $f Dec 10 15:47:21 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Dec 10 15:47:21 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 10 15:47:21 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 10 15:47:21 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Dec 10 15:47:21 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:21 crc kubenswrapper[5114]: else Dec 10 15:47:21 crc kubenswrapper[5114]: mkdir $reg_dir_path Dec 10 15:47:21 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Dec 10 15:47:21 crc kubenswrapper[5114]: echo $d Dec 10 15:47:21 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 10 15:47:21 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Dec 10 15:47:21 crc kubenswrapper[5114]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 10 15:47:21 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: sleep 60 & wait ${!} Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-sg27x_openshift-image-registry(a54715ec-382b-4bb8-bef2-f125ee0bae2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.948516 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-sg27x" podUID="a54715ec-382b-4bb8-bef2-f125ee0bae2b" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.952023 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wbl48" Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.957605 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.965138 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a3e165c_439d_4282_b1e7_179dca439343.slice/crio-ae57bcfda7fde3f116d412ad5c387292cff2832904c7cde3370bcc0aa1aa98f1 WatchSource:0}: Error finding container ae57bcfda7fde3f116d412ad5c387292cff2832904c7cde3370bcc0aa1aa98f1: Status 404 returned error can't find the container with id ae57bcfda7fde3f116d412ad5c387292cff2832904c7cde3370bcc0aa1aa98f1 Dec 10 15:47:21 crc kubenswrapper[5114]: I1210 15:47:21.966454 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.967158 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9xxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod multus-additional-cni-plugins-wbl48_openshift-multus(3a3e165c-439d-4282-b1e7-179dca439343): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.968613 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wbl48" podUID="3a3e165c-439d-4282-b1e7-179dca439343" Dec 10 15:47:21 crc kubenswrapper[5114]: W1210 15:47:21.978494 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d5aad2_7968_4ff9_a9fa_50a133a77df8.slice/crio-715ef84ad14f85866a9983d9bff96f891290de463b18f5e8b09f2d89451140e8 WatchSource:0}: Error finding container 715ef84ad14f85866a9983d9bff96f891290de463b18f5e8b09f2d89451140e8: Status 404 returned error can't find the container with id 715ef84ad14f85866a9983d9bff96f891290de463b18f5e8b09f2d89451140e8 Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.980372 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:21 crc kubenswrapper[5114]: set -euo pipefail Dec 10 15:47:21 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 10 15:47:21 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 10 15:47:21 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Dec 10 15:47:21 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 10 15:47:21 crc kubenswrapper[5114]: TS=$(date +%s) Dec 10 15:47:21 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 10 15:47:21 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: log_missing_certs(){ Dec 10 15:47:21 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 10 15:47:21 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 10 15:47:21 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 10 15:47:21 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: } Dec 10 15:47:21 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 10 15:47:21 crc kubenswrapper[5114]: log_missing_certs Dec 10 15:47:21 crc kubenswrapper[5114]: sleep 5 Dec 10 15:47:21 crc kubenswrapper[5114]: done Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 10 15:47:21 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Dec 10 15:47:21 crc kubenswrapper[5114]: --logtostderr \ Dec 10 15:47:21 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Dec 10 15:47:21 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 10 15:47:21 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Dec 10 15:47:21 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Dec 10 15:47:21 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Dec 10 15:47:21 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.982565 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:21 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:21 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 
15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Dec 10 15:47:21 crc kubenswrapper[5114]: # will rollout control plane pods as well Dec 10 15:47:21 crc kubenswrapper[5114]: network_segmentation_enabled_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: multi_network_enabled_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: route_advertisements_enable_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Dec 10 15:47:21 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Dec 10 15:47:21 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: # Enable admin network policy if configured (control-plane always full mode) Dec 10 15:47:21 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Dec 10 15:47:21 crc 
kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:21 crc kubenswrapper[5114]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Dec 10 15:47:21 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Dec 10 15:47:21 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Dec 10 15:47:21 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Dec 10 15:47:21 crc kubenswrapper[5114]: else Dec 10 15:47:21 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 10 15:47:21 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:21 crc kubenswrapper[5114]: fi Dec 10 15:47:21 crc kubenswrapper[5114]: Dec 10 15:47:21 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 10 15:47:21 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:21 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 10 15:47:21 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Dec 10 15:47:21 crc kubenswrapper[5114]: --metrics-enable-pprof \ Dec 10 15:47:21 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${multi_network_enabled_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-egress-ip=true \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-egress-qos=true \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-egress-service=true \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-multicast \ Dec 10 15:47:21 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Dec 10 15:47:21 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Dec 10 15:47:21 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:21 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:21 crc kubenswrapper[5114]: E1210 15:47:21.983780 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.002899 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.038923 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"c
ri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.080405 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.119241 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.161906 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.174575 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.174695 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 
15:47:22.174732 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.174910 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.174989 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:23.174972672 +0000 UTC m=+68.895773849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.175405 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:23.175393132 +0000 UTC m=+68.896194319 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.175553 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.175635 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:23.175623068 +0000 UTC m=+68.896424245 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.199301 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc 
kubenswrapper[5114]: I1210 15:47:22.241839 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.275566 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.275830 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.275925 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:47:23.27590565 +0000 UTC m=+68.996706827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.275970 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.276017 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.276034 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.276111 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:23.276091314 +0000 UTC m=+68.996892491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.275645 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.276749 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.276986 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.277007 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.277040 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod 
openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.277093 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:23.277080219 +0000 UTC m=+68.997881396 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.279189 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.319990 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.357742 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\"
:\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.573015 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.573848 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.575186 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.576150 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.578169 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.579782 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.581310 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.582555 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.583136 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.584398 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.585167 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.586628 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 
15:47:22.587292 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.588748 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.589174 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.589801 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.590818 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.591758 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.592989 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.593805 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.594678 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.596683 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.597710 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.598611 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.599927 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.600938 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.602671 5114 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.603867 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.606171 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.607402 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.608658 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.610127 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.611534 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.613018 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.613980 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.615124 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.616002 5114 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.616106 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.619433 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.620374 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.621692 5114 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.622495 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.623327 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.624157 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.625460 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.625905 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.626546 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.627718 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.628488 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.629526 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.630147 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.631111 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.631854 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.633072 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.635454 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.636082 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.637233 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.637989 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.639245 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.639393 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.836196 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"acad417964c05a99524067ddb95a60b966f7790623a92c0159a2d11bd74f8331"} Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.836948 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"fb936fd9626c598dc19619e492b47b87567e934de2980d5f8b73358b64ee7fae"} Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.837693 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"a199cf1da8fb7790abbb3c746b8ca2bfcd2d855529f7600f1eee455a9ec8496b"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.838640 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:22 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Dec 10 15:47:22 crc kubenswrapper[5114]: else Dec 10 15:47:22 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 10 15:47:22 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 10 15:47:22 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.838688 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.838780 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"8405b6b0f18ce90d94d73840a5d2ed017e153c4d9e67ff541cc3c6023b54e5f5"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.839194 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 10 15:47:22 crc kubenswrapper[5114]: apiVersion: v1 Dec 10 15:47:22 crc kubenswrapper[5114]: clusters: Dec 10 15:47:22 crc kubenswrapper[5114]: - cluster: Dec 10 15:47:22 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 10 15:47:22 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Dec 10 15:47:22 crc kubenswrapper[5114]: name: default-cluster Dec 10 15:47:22 crc kubenswrapper[5114]: contexts: Dec 10 15:47:22 crc kubenswrapper[5114]: - context: Dec 10 15:47:22 crc kubenswrapper[5114]: cluster: default-cluster Dec 10 15:47:22 crc kubenswrapper[5114]: namespace: default Dec 10 15:47:22 crc kubenswrapper[5114]: user: default-auth Dec 10 15:47:22 crc kubenswrapper[5114]: name: default-context Dec 10 15:47:22 crc kubenswrapper[5114]: current-context: default-context Dec 10 15:47:22 crc kubenswrapper[5114]: kind: Config Dec 10 15:47:22 crc kubenswrapper[5114]: preferences: {} Dec 10 15:47:22 crc kubenswrapper[5114]: users: Dec 10 15:47:22 crc kubenswrapper[5114]: - name: default-auth Dec 10 15:47:22 crc kubenswrapper[5114]: user: Dec 10 15:47:22 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:22 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:22 crc kubenswrapper[5114]: EOF Dec 10 15:47:22 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgklm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfnl_openshift-ovn-kubernetes(5bef68a8-63de-4992-87b6-3dc6c70f5a1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.839505 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lg6m5" event={"ID":"e7c683ba-536f-45e5-89b0-fe14989cad13","Type":"ContainerStarted","Data":"857e236d8e95408dd720ee6b728ed964327eb80f0265c864036079ea941ad944"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.839760 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.839880 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.840110 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.840250 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.840349 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"852212d43ae33e47138352f4bf9791eaafbf044d422b0389b2b6a13ea7b080b1"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.841061 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 10 15:47:22 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 10 15:47:22 crc kubenswrapper[5114]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfxbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lg6m5_openshift-multus(e7c683ba-536f-45e5-89b0-fe14989cad13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.841195 5114 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerStarted","Data":"715ef84ad14f85866a9983d9bff96f891290de463b18f5e8b09f2d89451140e8"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.842000 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:22 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 10 15:47:22 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 10 15:47:22 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Dec 10 15:47:22 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 10 15:47:22 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 10 15:47:22 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 10 15:47:22 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:22 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Dec 10 15:47:22 crc kubenswrapper[5114]: --webhook-port=9743 \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${ho_enable} \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:22 crc kubenswrapper[5114]: --disable-approver \ Dec 10 15:47:22 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Dec 10 15:47:22 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:22 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.842003 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerStarted","Data":"ae57bcfda7fde3f116d412ad5c387292cff2832904c7cde3370bcc0aa1aa98f1"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.842353 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.842400 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lg6m5" podUID="e7c683ba-536f-45e5-89b0-fe14989cad13" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.842460 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:22 crc kubenswrapper[5114]: set -euo pipefail Dec 10 15:47:22 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 10 15:47:22 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 10 15:47:22 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Dec 10 15:47:22 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 10 15:47:22 crc kubenswrapper[5114]: TS=$(date +%s) Dec 10 15:47:22 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 10 15:47:22 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: log_missing_certs(){ Dec 10 15:47:22 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 10 15:47:22 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 10 15:47:22 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 10 15:47:22 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: } Dec 10 15:47:22 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 10 15:47:22 crc kubenswrapper[5114]: log_missing_certs Dec 10 15:47:22 crc kubenswrapper[5114]: sleep 5 Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 10 15:47:22 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Dec 10 15:47:22 crc kubenswrapper[5114]: --logtostderr \ Dec 10 15:47:22 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Dec 10 15:47:22 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 10 15:47:22 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Dec 10 15:47:22 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Dec 10 15:47:22 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Dec 10 15:47:22 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.843240 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9xxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wbl48_openshift-multus(3a3e165c-439d-4282-b1e7-179dca439343): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.844261 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.844321 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wbl48" podUID="3a3e165c-439d-4282-b1e7-179dca439343" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.844405 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sg27x" event={"ID":"a54715ec-382b-4bb8-bef2-f125ee0bae2b","Type":"ContainerStarted","Data":"d55b5d1a0f756702f3ac1b0514f4e6524c696c9aafaf7eef0b68cd237ff88ea0"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.845100 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ -f 
"/env/_master" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:22 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 10 15:47:22 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:22 crc kubenswrapper[5114]: --disable-webhook \ Dec 10 15:47:22 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:22 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.845113 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:22 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc 
kubenswrapper[5114]: ovn_v6_join_subnet_opt= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Dec 10 15:47:22 crc kubenswrapper[5114]: # will rollout control plane pods as well Dec 10 15:47:22 crc kubenswrapper[5114]: network_segmentation_enabled_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: multi_network_enabled_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: route_advertisements_enable_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Dec 10 15:47:22 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 
crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # Enable admin network policy if configured (control-plane always full mode) Dec 10 15:47:22 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Dec 10 15:47:22 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Dec 10 15:47:22 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Dec 10 15:47:22 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Dec 10 15:47:22 crc kubenswrapper[5114]: else Dec 10 15:47:22 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 10 15:47:22 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 10 15:47:22 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:22 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 10 15:47:22 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Dec 10 15:47:22 crc kubenswrapper[5114]: --metrics-enable-pprof \ Dec 10 15:47:22 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${multi_network_enabled_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-egress-ip=true \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-egress-qos=true \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-egress-service=true \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-multicast \ Dec 10 15:47:22 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Dec 10 15:47:22 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Dec 10 15:47:22 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.845667 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 10 15:47:22 crc kubenswrapper[5114]: while [ true ]; Dec 10 15:47:22 crc kubenswrapper[5114]: do Dec 10 15:47:22 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Dec 10 15:47:22 crc kubenswrapper[5114]: echo $f Dec 10 15:47:22 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Dec 10 15:47:22 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 10 15:47:22 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 10 15:47:22 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Dec 10 15:47:22 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:22 crc kubenswrapper[5114]: else Dec 10 15:47:22 crc kubenswrapper[5114]: mkdir $reg_dir_path Dec 10 15:47:22 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Dec 10 15:47:22 crc kubenswrapper[5114]: echo $d Dec 10 15:47:22 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 10 15:47:22 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Dec 10 15:47:22 crc kubenswrapper[5114]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 10 15:47:22 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: sleep 60 & wait ${!} Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-sg27x_openshift-image-registry(a54715ec-382b-4bb8-bef2-f125ee0bae2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.846618 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.846644 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-49rgv" event={"ID":"379e5b28-21b4-4727-a60f-0fad71bf89fa","Type":"ContainerStarted","Data":"68ed71554505d070e4fc7e2ad6e6d9973c4c1c069894e9be8f9bbf05df9de042"} Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.847405 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-sg27x" podUID="a54715ec-382b-4bb8-bef2-f125ee0bae2b" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.847482 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.847587 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:22 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:22 crc kubenswrapper[5114]: set -uo pipefail Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 10 15:47:22 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Dec 10 15:47:22 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Dec 10 15:47:22 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 10 15:47:22 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Dec 10 15:47:22 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: while true; do Dec 10 15:47:22 crc kubenswrapper[5114]: declare -A svc_ips Dec 10 15:47:22 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Dec 10 15:47:22 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Dec 10 15:47:22 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 10 15:47:22 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 10 15:47:22 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 10 15:47:22 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:22 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:22 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:22 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 10 15:47:22 crc kubenswrapper[5114]: for i in ${!cmds[*]} Dec 10 15:47:22 crc kubenswrapper[5114]: do Dec 10 15:47:22 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Dec 10 15:47:22 crc kubenswrapper[5114]: break Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Dec 10 15:47:22 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 10 15:47:22 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 10 15:47:22 crc kubenswrapper[5114]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 10 15:47:22 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 10 15:47:22 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:22 crc kubenswrapper[5114]: continue Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # Append resolver entries for services Dec 10 15:47:22 crc kubenswrapper[5114]: rc=0 Dec 10 15:47:22 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Dec 10 15:47:22 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Dec 10 15:47:22 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Dec 10 15:47:22 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:22 crc kubenswrapper[5114]: continue Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: Dec 10 15:47:22 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 10 15:47:22 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Dec 10 15:47:22 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 10 15:47:22 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 10 15:47:22 crc kubenswrapper[5114]: fi Dec 10 15:47:22 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:22 crc kubenswrapper[5114]: unset svc_ips Dec 10 15:47:22 crc kubenswrapper[5114]: done Dec 10 15:47:22 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2wz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-49rgv_openshift-dns(379e5b28-21b4-4727-a60f-0fad71bf89fa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:22 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:22 crc kubenswrapper[5114]: E1210 15:47:22.848632 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-49rgv" podUID="379e5b28-21b4-4727-a60f-0fad71bf89fa" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.848806 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.856228 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.861356 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.868057 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.873812 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.879942 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.892726 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.901297 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.909168 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.921994 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.931881 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.950417 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519
052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.961471 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf
9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 
1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.971735 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:22 crc kubenswrapper[5114]: I1210 15:47:22.981164 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.000912 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.040946 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.079201 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.127949 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.162490 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.187574 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.187781 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.187750661 +0000 UTC m=+70.908551838 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.187891 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.187932 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.188069 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.188149 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.188138251 +0000 UTC m=+70.908939498 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.188071 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.188331 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.188312535 +0000 UTC m=+70.909113722 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.201141 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.243008 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.279985 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.289400 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.289452 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.289482 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289615 5114 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289640 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289649 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289695 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.289681754 +0000 UTC m=+71.010482931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289986 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.290000 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.290008 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.290037 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.290028983 +0000 UTC m=+71.010830160 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.289985 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.290062 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:25.290056584 +0000 UTC m=+71.010857751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.317674 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.359768 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.399553 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.441546 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.479212 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.522206 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.558925 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.568210 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.568351 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.568422 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.568629 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.568657 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:23 crc kubenswrapper[5114]: E1210 15:47:23.568742 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.602046 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-l
og\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.639389 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.686907 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.723958 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.761628 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.801240 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.840444 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:23 crc kubenswrapper[5114]: I1210 15:47:23.883110 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.567941 
5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:24 crc kubenswrapper[5114]: E1210 15:47:24.568171 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.576298 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"res
ources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.588317 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.596517 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.612600 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.625479 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.644533 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519
052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.655440 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf
9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 
1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.665783 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.674534 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.687158 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.697548 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.705217 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.713537 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.721547 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.730313 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.736636 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.742318 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.747479 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:24 crc kubenswrapper[5114]: I1210 15:47:24.754156 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.208759 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.208927 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.208897959 +0000 UTC m=+74.929699136 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.208997 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.209048 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.209145 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.209174 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.209215 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.209200686 +0000 UTC m=+74.930001863 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.209234 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.209225417 +0000 UTC m=+74.930026714 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.310544 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.310587 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.310617 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310666 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310715 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310723 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.310709398 +0000 UTC m=+75.031510575 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310726 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310737 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310767 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.310758069 +0000 UTC m=+75.031559246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310809 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310848 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310861 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.310941 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:29.310920683 +0000 UTC m=+75.031721930 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.568061 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.568300 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.568332 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:25 crc kubenswrapper[5114]: I1210 15:47:25.568393 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.568509 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:25 crc kubenswrapper[5114]: E1210 15:47:25.568706 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.474374 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.475899 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.476204 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.476313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.476523 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.485386 5114 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.485684 5114 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.486636 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.486728 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.486756 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.486779 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.486797 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.500808 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.504639 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.504735 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.504763 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.504819 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.504853 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.514698 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.518417 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.518465 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.518481 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.518499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.518512 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.526345 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.529173 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.529243 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.529266 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.529333 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.529354 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.539340 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.542922 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.543000 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.543028 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.543059 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.543084 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.557089 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.557264 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.558485 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.558570 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.558596 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.558703 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.558861 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.571952 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:26 crc kubenswrapper[5114]: E1210 15:47:26.572056 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.661357 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.661398 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.661407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.661420 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.661429 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.763567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.763612 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.763622 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.763636 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.763648 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.865594 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.865667 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.865691 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.865720 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.865742 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.967483 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.967513 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.967524 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.967538 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:26 crc kubenswrapper[5114]: I1210 15:47:26.967548 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:26Z","lastTransitionTime":"2025-12-10T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.069584 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.069784 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.069828 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.069861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.069883 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.171919 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.171982 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.172002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.172027 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.172045 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.274226 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.274294 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.274309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.274331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.274342 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.376233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.376309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.376322 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.376340 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.376353 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.478151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.478233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.478260 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.478338 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.478365 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.567931 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.567928 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.567984 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:27 crc kubenswrapper[5114]: E1210 15:47:27.568850 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:27 crc kubenswrapper[5114]: E1210 15:47:27.568857 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:27 crc kubenswrapper[5114]: E1210 15:47:27.568903 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.580865 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.580940 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.580954 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.581000 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.581016 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.683655 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.683963 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.684066 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.684165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.684261 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.786562 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.786614 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.786624 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.786639 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.786649 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.888711 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.888755 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.888766 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.888781 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.888792 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.990739 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.990796 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.990809 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.990829 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:27 crc kubenswrapper[5114]: I1210 15:47:27.990841 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:27Z","lastTransitionTime":"2025-12-10T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.092784 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.092819 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.092830 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.092862 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.092872 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.195567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.195625 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.195635 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.195689 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.195700 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.297216 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.297255 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.297267 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.297392 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.297423 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.399442 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.399489 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.399502 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.399517 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.399528 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.502152 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.502207 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.502237 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.502255 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.502265 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.568049 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:28 crc kubenswrapper[5114]: E1210 15:47:28.568445 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.604497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.604572 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.604586 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.604602 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.604614 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.706705 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.706759 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.706772 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.706789 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.706802 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.808964 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.809222 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.809338 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.809543 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.809623 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.912040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.912107 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.912121 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.912139 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:28 crc kubenswrapper[5114]: I1210 15:47:28.912160 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:28Z","lastTransitionTime":"2025-12-10T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.014289 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.014332 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.014343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.014357 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.014366 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.116681 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.116738 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.116748 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.116761 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.116769 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.218022 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.218081 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.218091 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.218104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.218112 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.254887 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.255026 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.254995508 +0000 UTC m=+82.975796695 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.255416 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.255560 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.255791 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.255772698 +0000 UTC m=+82.976573895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.255720 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.256096 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.256265 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.25622722 +0000 UTC m=+82.977028437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.320726 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.320801 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.320830 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.320862 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.320885 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.357680 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.358198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.358530 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.357858 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.358978 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.358951243 +0000 UTC m=+83.079752460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.358307 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.358599 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.359494 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.359531 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.359634 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.359604489 +0000 UTC m=+83.080405676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.360115 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.360434 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.360782 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:37.360758879 +0000 UTC m=+83.081560096 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.422790 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.422832 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.422844 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.422863 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.422875 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.524876 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.524961 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.524986 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.525016 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.525040 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.568520 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.568701 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.568723 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.568858 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.569265 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:29 crc kubenswrapper[5114]: E1210 15:47:29.569627 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.627780 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.627838 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.627857 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.627881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.627899 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.730459 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.730791 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.730975 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.731133 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.731252 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.833104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.833177 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.833189 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.833204 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.833216 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.935130 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.935175 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.935198 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.935215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:29 crc kubenswrapper[5114]: I1210 15:47:29.935227 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:29Z","lastTransitionTime":"2025-12-10T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.037771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.037817 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.037826 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.037839 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.037850 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.140113 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.140397 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.140464 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.140537 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.140602 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.243662 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.243741 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.243787 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.243810 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.243828 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.346547 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.346601 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.346618 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.346638 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.346656 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.448765 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.448825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.448843 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.448868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.448885 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.550926 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.551414 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.551487 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.551550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.551695 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.568500 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:30 crc kubenswrapper[5114]: E1210 15:47:30.568871 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.654390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.654492 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.654514 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.654540 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.654566 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.757156 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.757255 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.757317 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.757345 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.757364 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.860198 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.860241 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.860253 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.860269 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.860301 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.962665 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.962922 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.962989 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.963053 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:30 crc kubenswrapper[5114]: I1210 15:47:30.963138 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:30Z","lastTransitionTime":"2025-12-10T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.066361 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.066414 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.066430 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.066448 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.066461 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.168516 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.168563 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.168576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.168594 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.168608 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.271104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.271143 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.271157 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.271172 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.271182 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.373620 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.373693 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.373718 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.373750 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.373773 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.477230 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.477355 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.477377 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.477403 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.477423 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.568379 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.568415 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.568383 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:31 crc kubenswrapper[5114]: E1210 15:47:31.568505 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:31 crc kubenswrapper[5114]: E1210 15:47:31.568681 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:31 crc kubenswrapper[5114]: E1210 15:47:31.568989 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.579748 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.579899 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.579915 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.579931 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.579941 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.681714 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.681807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.681836 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.681861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.681881 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.784211 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.784302 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.784323 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.784344 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.784361 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.886097 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.886179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.886196 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.886225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.886238 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.988369 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.988447 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.988471 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.988502 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:31 crc kubenswrapper[5114]: I1210 15:47:31.988540 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:31Z","lastTransitionTime":"2025-12-10T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.091484 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.091549 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.091568 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.091591 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.091608 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.193019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.193070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.193080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.193095 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.193106 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.295214 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.295303 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.295326 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.295363 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.295382 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.398362 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.398791 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.398982 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.399147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.399366 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.502189 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.502257 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.502331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.502380 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.502406 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.568356 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:32 crc kubenswrapper[5114]: E1210 15:47:32.569265 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.606119 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.606176 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.606190 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.606210 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.606226 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.708239 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.708315 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.708329 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.708346 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.708358 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.810555 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.810596 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.810607 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.810624 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.810634 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.855269 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.870351 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.898672 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388e
e2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90
da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.910899 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2f
b6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 
15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.912082 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.912192 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.912251 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.912516 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.912635 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:32Z","lastTransitionTime":"2025-12-10T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.923445 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.938135 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.950682 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.965023 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.975462 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.986832 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:32 crc kubenswrapper[5114]: I1210 15:47:32.998717 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.011041 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.014486 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.014520 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.014529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.014543 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.014553 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.021688 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.029653 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.037597 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.044138 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not 
yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.049775 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.056844 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.064051 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.080998 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.117005 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.117051 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: 
I1210 15:47:33.117063 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.117076 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.117086 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.219777 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.219843 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.219861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.219885 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.219903 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.323151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.323309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.323341 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.323376 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.323398 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.425305 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.425387 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.425412 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.425446 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.425472 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.527877 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.527926 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.527935 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.527948 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.527957 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.568574 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:33 crc kubenswrapper[5114]: E1210 15:47:33.568825 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.568883 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:33 crc kubenswrapper[5114]: E1210 15:47:33.569573 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.569640 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:33 crc kubenswrapper[5114]: E1210 15:47:33.569851 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:33 crc kubenswrapper[5114]: E1210 15:47:33.575365 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:33 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 10 15:47:33 crc kubenswrapper[5114]: apiVersion: v1 Dec 10 15:47:33 crc kubenswrapper[5114]: clusters: Dec 10 15:47:33 crc kubenswrapper[5114]: - cluster: Dec 10 15:47:33 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 10 15:47:33 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Dec 10 15:47:33 crc kubenswrapper[5114]: name: default-cluster Dec 10 15:47:33 crc kubenswrapper[5114]: contexts: Dec 10 15:47:33 crc kubenswrapper[5114]: - context: Dec 10 15:47:33 crc kubenswrapper[5114]: cluster: default-cluster Dec 10 15:47:33 crc kubenswrapper[5114]: namespace: default Dec 10 15:47:33 crc kubenswrapper[5114]: user: default-auth Dec 10 15:47:33 crc kubenswrapper[5114]: name: default-context Dec 10 15:47:33 crc kubenswrapper[5114]: current-context: default-context Dec 10 15:47:33 crc kubenswrapper[5114]: kind: Config Dec 10 15:47:33 crc kubenswrapper[5114]: preferences: {} Dec 10 15:47:33 crc kubenswrapper[5114]: users: Dec 10 15:47:33 crc kubenswrapper[5114]: - name: default-auth Dec 10 15:47:33 crc kubenswrapper[5114]: user: Dec 10 15:47:33 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:33 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:33 crc kubenswrapper[5114]: EOF Dec 10 15:47:33 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgklm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfnl_openshift-ovn-kubernetes(5bef68a8-63de-4992-87b6-3dc6c70f5a1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:33 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:33 crc kubenswrapper[5114]: E1210 15:47:33.576762 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.629824 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.629907 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.629927 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.629951 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.629968 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.732179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.732334 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.732372 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.732404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.732427 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.834257 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.834334 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.834347 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.834366 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.834379 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.936795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.936868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.936883 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.936901 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:33 crc kubenswrapper[5114]: I1210 15:47:33.936915 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:33Z","lastTransitionTime":"2025-12-10T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.039854 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.039901 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.039912 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.039927 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.039953 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.141987 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.142050 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.142064 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.142082 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.142095 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.243947 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.244003 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.244016 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.244032 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.244066 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.346785 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.346837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.346850 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.346866 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.346878 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.449214 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.449356 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.449453 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.449967 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.450026 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.552234 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.552301 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.552319 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.552335 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.552346 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.568153 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:34 crc kubenswrapper[5114]: E1210 15:47:34.568391 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:34 crc kubenswrapper[5114]: E1210 15:47:34.570179 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:34 crc kubenswrapper[5114]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:34 crc kubenswrapper[5114]: set -euo pipefail Dec 10 15:47:34 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 10 15:47:34 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 10 15:47:34 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Dec 10 15:47:34 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 10 15:47:34 crc kubenswrapper[5114]: TS=$(date +%s) Dec 10 15:47:34 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 10 15:47:34 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: log_missing_certs(){ Dec 10 15:47:34 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 10 15:47:34 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 10 15:47:34 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 10 15:47:34 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: } Dec 10 15:47:34 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 10 15:47:34 crc kubenswrapper[5114]: log_missing_certs Dec 10 15:47:34 crc kubenswrapper[5114]: sleep 5 Dec 10 15:47:34 crc kubenswrapper[5114]: done Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 10 15:47:34 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Dec 10 15:47:34 crc kubenswrapper[5114]: --logtostderr \ Dec 10 15:47:34 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Dec 10 15:47:34 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 10 15:47:34 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Dec 10 15:47:34 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Dec 10 15:47:34 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Dec 10 15:47:34 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:34 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:34 crc kubenswrapper[5114]: E1210 15:47:34.573019 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:34 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:34 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:34 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 
15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Dec 10 15:47:34 crc kubenswrapper[5114]: # will rollout control plane pods as well Dec 10 15:47:34 crc kubenswrapper[5114]: network_segmentation_enabled_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: multi_network_enabled_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: route_advertisements_enable_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Dec 10 15:47:34 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Dec 10 15:47:34 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: # Enable admin network policy if configured (control-plane always full mode) Dec 10 15:47:34 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Dec 10 15:47:34 crc 
kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:34 crc kubenswrapper[5114]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Dec 10 15:47:34 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Dec 10 15:47:34 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Dec 10 15:47:34 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Dec 10 15:47:34 crc kubenswrapper[5114]: else Dec 10 15:47:34 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 10 15:47:34 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:34 crc kubenswrapper[5114]: fi Dec 10 15:47:34 crc kubenswrapper[5114]: Dec 10 15:47:34 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 10 15:47:34 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:34 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Dec 10 15:47:34 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 10 15:47:34 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 10 15:47:34 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Dec 10 15:47:34 crc kubenswrapper[5114]: --metrics-enable-pprof \ Dec 10 15:47:34 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${multi_network_enabled_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-egress-ip=true \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-egress-qos=true \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-egress-service=true \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-multicast \ Dec 10 15:47:34 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Dec 10 15:47:34 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Dec 10 15:47:34 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:34 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:34 crc kubenswrapper[5114]: E1210 15:47:34.574120 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.582042 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.602922 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519
052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.618067 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.629752 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.637978 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.646062 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.654593 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.654788 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.654900 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.654988 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.655074 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.659067 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.666857 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.676995 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.685263 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.696123 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.704494 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.712442 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.720834 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.732348 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.740385 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.751103 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.756814 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.756903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.756918 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.756933 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.756942 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.763220 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.778365 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.860055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.860137 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: 
I1210 15:47:34.860164 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.860195 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.860217 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.962449 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.962530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.962555 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.962585 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:34 crc kubenswrapper[5114]: I1210 15:47:34.962607 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:34Z","lastTransitionTime":"2025-12-10T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.064903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.064973 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.064997 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.065025 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.065044 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.167805 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.167864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.167882 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.167905 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.167927 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.270539 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.270623 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.270644 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.270672 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.270724 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.373158 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.373217 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.373229 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.373243 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.373253 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.475890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.476755 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.476903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.477045 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.477178 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.568534 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.568723 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.568742 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.568839 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.568889 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.568939 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.570527 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:35 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:35 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:35 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:35 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:35 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:35 crc kubenswrapper[5114]: fi Dec 10 15:47:35 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 10 15:47:35 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 10 15:47:35 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Dec 10 15:47:35 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 10 15:47:35 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 10 15:47:35 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 10 15:47:35 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:35 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 10 15:47:35 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Dec 10 15:47:35 crc kubenswrapper[5114]: --webhook-port=9743 \ Dec 10 15:47:35 crc kubenswrapper[5114]: ${ho_enable} \ Dec 10 15:47:35 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:35 crc kubenswrapper[5114]: --disable-approver \ Dec 10 15:47:35 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 10 15:47:35 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Dec 10 15:47:35 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 10 15:47:35 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:35 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:35 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.573859 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.574339 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:35 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:35 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:35 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:35 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:35 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:35 crc kubenswrapper[5114]: fi Dec 10 15:47:35 crc kubenswrapper[5114]: Dec 10 15:47:35 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 10 15:47:35 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:35 crc kubenswrapper[5114]: --disable-webhook \ Dec 10 15:47:35 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 10 15:47:35 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:35 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:35 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.575650 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.576006 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:35 crc kubenswrapper[5114]: E1210 15:47:35.577185 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.578545 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.578587 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.578600 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.578616 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.578627 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.680587 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.680646 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.680659 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.680680 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.680693 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.782737 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.782783 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.782796 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.782815 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.782827 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.884613 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.884688 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.884713 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.884743 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.884769 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.987392 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.987458 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.987482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.987511 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:35 crc kubenswrapper[5114]: I1210 15:47:35.987530 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:35Z","lastTransitionTime":"2025-12-10T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.090083 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.090126 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.090135 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.090151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.090161 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.192384 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.192472 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.192512 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.192544 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.192582 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.295192 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.295333 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.295361 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.295390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.295409 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.397364 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.397452 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.397482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.397515 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.397542 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.499098 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.499147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.499156 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.499171 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.499182 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.568408 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.568601 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.570222 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:36 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:36 crc kubenswrapper[5114]: set -uo pipefail Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 10 15:47:36 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Dec 10 15:47:36 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Dec 10 15:47:36 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 10 15:47:36 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Dec 10 15:47:36 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: while true; do Dec 10 15:47:36 crc kubenswrapper[5114]: declare -A svc_ips Dec 10 15:47:36 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Dec 10 15:47:36 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Dec 10 15:47:36 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 10 15:47:36 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 10 15:47:36 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 10 15:47:36 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:36 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:36 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:36 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 10 15:47:36 crc kubenswrapper[5114]: for i in ${!cmds[*]} Dec 10 15:47:36 crc kubenswrapper[5114]: do Dec 10 15:47:36 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Dec 10 15:47:36 crc kubenswrapper[5114]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 10 15:47:36 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Dec 10 15:47:36 crc kubenswrapper[5114]: break Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Dec 10 15:47:36 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 10 15:47:36 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 10 15:47:36 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 10 15:47:36 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 10 15:47:36 crc kubenswrapper[5114]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 10 15:47:36 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 10 15:47:36 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:36 crc kubenswrapper[5114]: continue Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: # Append resolver entries for services Dec 10 15:47:36 crc kubenswrapper[5114]: rc=0 Dec 10 15:47:36 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Dec 10 15:47:36 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Dec 10 15:47:36 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Dec 10 15:47:36 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:36 crc kubenswrapper[5114]: continue Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: Dec 10 15:47:36 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 10 15:47:36 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Dec 10 15:47:36 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 10 15:47:36 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:36 crc kubenswrapper[5114]: unset svc_ips Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2wz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-49rgv_openshift-dns(379e5b28-21b4-4727-a60f-0fad71bf89fa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:36 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.570518 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:36 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:36 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:36 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 10 15:47:36 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Dec 10 15:47:36 crc kubenswrapper[5114]: else Dec 10 15:47:36 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 10 15:47:36 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 10 15:47:36 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:36 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.570882 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:36 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 10 15:47:36 crc kubenswrapper[5114]: while [ true ]; Dec 10 15:47:36 crc kubenswrapper[5114]: do Dec 10 15:47:36 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Dec 10 15:47:36 crc kubenswrapper[5114]: echo $f Dec 10 15:47:36 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Dec 10 15:47:36 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 10 15:47:36 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 10 15:47:36 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Dec 10 15:47:36 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:36 crc kubenswrapper[5114]: else Dec 10 15:47:36 crc kubenswrapper[5114]: mkdir $reg_dir_path Dec 10 15:47:36 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Dec 10 15:47:36 crc kubenswrapper[5114]: echo $d Dec 10 15:47:36 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 10 15:47:36 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Dec 10 15:47:36 crc kubenswrapper[5114]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 10 15:47:36 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Dec 10 15:47:36 crc kubenswrapper[5114]: fi Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: sleep 60 & wait ${!} Dec 10 15:47:36 crc kubenswrapper[5114]: done Dec 10 15:47:36 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-sg27x_openshift-image-registry(a54715ec-382b-4bb8-bef2-f125ee0bae2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:36 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.571741 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.571773 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-49rgv" podUID="379e5b28-21b4-4727-a60f-0fad71bf89fa" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.572444 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-sg27x" podUID="a54715ec-382b-4bb8-bef2-f125ee0bae2b" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.601096 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.601131 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.601141 5114 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.601154 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.601163 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.703522 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.703629 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.703668 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.703695 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.703717 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.721955 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.722032 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.722050 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.722074 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.722090 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.738035 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.743526 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.743650 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.743672 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.743730 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.743749 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.760476 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.771156 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.771292 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.771312 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.771340 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.771359 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.785855 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.791367 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.791444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.791472 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.791507 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.791539 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.802925 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.807666 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.807764 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.807788 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.807816 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.807868 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.824255 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:36 crc kubenswrapper[5114]: E1210 15:47:36.824527 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.826061 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.826104 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.826121 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.826141 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.826157 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.928846 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.928951 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.928990 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.929019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:36 crc kubenswrapper[5114]: I1210 15:47:36.929040 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:36Z","lastTransitionTime":"2025-12-10T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.032178 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.032248 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.032313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.032349 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.032372 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.135355 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.135473 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.135500 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.135532 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.135554 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.238225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.238321 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.238340 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.238364 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.238384 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.239485 5114 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.341110 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.341173 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.341194 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.341234 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.341334 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.350102 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.350269 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.350331 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.350289576 +0000 UTC m=+99.071090763 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.350390 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.350408 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.350450 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.35043454 +0000 UTC m=+99.071235727 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.350483 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.350523 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.350514842 +0000 UTC m=+99.071316039 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.444133 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.444342 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.444384 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.444417 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.444440 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.450917 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.450986 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.451057 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451104 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451168 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451193 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451205 5114 projected.go:194] Error preparing data for 
projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451205 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.451173343 +0000 UTC m=+99.171974560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451299 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.451255135 +0000 UTC m=+99.172056312 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451382 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451416 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451435 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.451583 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:47:53.451561603 +0000 UTC m=+99.172362820 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.547114 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.547168 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.547186 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.547210 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.547231 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.567928 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.568358 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.568348 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.568526 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.568835 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.569065 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.570413 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9xxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wbl48_openshift-multus(3a3e165c-439d-4282-b1e7-179dca439343): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.571340 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:37 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 10 15:47:37 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 10 15:47:37 crc kubenswrapper[5114]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfxbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lg6m5_openshift-multus(e7c683ba-536f-45e5-89b0-fe14989cad13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:37 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.571953 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wbl48" podUID="3a3e165c-439d-4282-b1e7-179dca439343" Dec 10 15:47:37 crc kubenswrapper[5114]: E1210 15:47:37.573115 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lg6m5" podUID="e7c683ba-536f-45e5-89b0-fe14989cad13" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.682080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.682153 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.682180 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.682214 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.682235 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.784497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.784554 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.784583 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.784634 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.784657 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.886701 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.886793 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.886821 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.886860 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.886887 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.989309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.989389 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.989415 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.989446 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:37 crc kubenswrapper[5114]: I1210 15:47:37.989467 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:37Z","lastTransitionTime":"2025-12-10T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.092216 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.092314 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.092335 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.092397 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.092419 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.194379 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.194457 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.194486 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.194518 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.194545 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.296888 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.297018 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.297039 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.297086 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.297104 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.400252 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.400363 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.400447 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.400478 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.400594 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.502604 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.502660 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.502679 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.502702 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.502722 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.568486 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:38 crc kubenswrapper[5114]: E1210 15:47:38.568631 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:38 crc kubenswrapper[5114]: E1210 15:47:38.570800 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:38 crc kubenswrapper[5114]: E1210 15:47:38.571936 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.605500 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.605574 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.605616 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.605661 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.605691 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.707795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.707838 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.707849 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.707864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.707876 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.809616 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.809692 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.809720 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.809749 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.809772 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.911931 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.911990 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.912007 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.912029 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:38 crc kubenswrapper[5114]: I1210 15:47:38.912045 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:38Z","lastTransitionTime":"2025-12-10T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.014982 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.015054 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.015079 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.015109 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.015131 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.117609 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.117755 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.117842 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.117928 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.117963 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.220963 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.221070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.221098 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.221129 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.221148 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.323441 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.323514 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.323540 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.323574 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.323596 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.426124 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.426190 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.426210 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.426232 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.426246 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.529001 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.529095 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.529122 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.529154 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.529174 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.568327 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.568355 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.568357 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:39 crc kubenswrapper[5114]: E1210 15:47:39.568558 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:39 crc kubenswrapper[5114]: E1210 15:47:39.568689 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:39 crc kubenswrapper[5114]: E1210 15:47:39.568862 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.632197 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.632315 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.632341 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.632366 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.632385 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.735496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.735572 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.735596 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.735624 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.735647 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.838938 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.839012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.839035 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.839064 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.839087 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.941896 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.941965 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.941988 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.942019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:39 crc kubenswrapper[5114]: I1210 15:47:39.942041 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:39Z","lastTransitionTime":"2025-12-10T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.044403 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.044758 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.044797 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.044821 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.044839 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.147667 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.147746 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.147771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.147801 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.147823 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.250567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.250637 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.250656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.250698 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.250717 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.353672 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.353772 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.353805 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.353837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.353863 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.456677 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.456754 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.456782 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.456815 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.456845 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.520562 5114 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.559692 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.559776 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.559798 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.559825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.559844 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.568112 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:40 crc kubenswrapper[5114]: E1210 15:47:40.568309 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.666159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.666438 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.666563 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.667373 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.667453 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.770356 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.770432 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.770452 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.770477 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.770495 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.873104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.873166 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.873185 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.873207 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.873225 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.976042 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.976092 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.976105 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.976125 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:40 crc kubenswrapper[5114]: I1210 15:47:40.976139 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:40Z","lastTransitionTime":"2025-12-10T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.078476 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.078551 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.078566 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.078590 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.078606 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.181160 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.181244 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.181274 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.181351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.181374 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.283510 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.283595 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.283622 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.283649 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.283667 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.386563 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.386635 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.386653 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.386677 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.386696 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.489661 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.489755 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.489785 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.489841 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.489870 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.568199 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.568441 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.568497 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:41 crc kubenswrapper[5114]: E1210 15:47:41.568582 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:41 crc kubenswrapper[5114]: E1210 15:47:41.568438 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:41 crc kubenswrapper[5114]: E1210 15:47:41.568701 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.592360 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.592518 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.592550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.592629 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.592657 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.695236 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.695323 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.695343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.695364 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.695380 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.797293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.797336 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.797351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.797368 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.797379 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.899289 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.899335 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.899351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.899368 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:41 crc kubenswrapper[5114]: I1210 15:47:41.899378 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:41Z","lastTransitionTime":"2025-12-10T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.001487 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.001528 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.001545 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.001559 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.001570 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.103567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.103620 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.103633 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.103650 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.103663 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.206084 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.206121 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.206129 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.206142 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.206161 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.308215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.308330 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.308358 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.308432 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.308546 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.410760 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.410806 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.410818 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.410837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.410849 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.512857 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.512911 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.512932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.512948 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.512960 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.568053 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:42 crc kubenswrapper[5114]: E1210 15:47:42.568309 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.615741 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.615805 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.615823 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.615844 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.615862 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.718141 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.718221 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.718247 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.718323 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.718353 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.820767 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.820849 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.820867 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.820890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.820909 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.923208 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.923248 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.923260 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.923294 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:42 crc kubenswrapper[5114]: I1210 15:47:42.923307 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:42Z","lastTransitionTime":"2025-12-10T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.025960 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.026043 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.026070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.026103 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.026127 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.129022 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.129091 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.129115 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.129147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.129170 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.231569 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.231631 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.231643 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.231657 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.231667 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.334537 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.334631 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.334658 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.334688 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.334711 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.436947 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.437101 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.437127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.437158 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.437185 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.540419 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.540530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.540558 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.540588 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.540612 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.568400 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.568443 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.568453 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:43 crc kubenswrapper[5114]: E1210 15:47:43.568588 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:43 crc kubenswrapper[5114]: E1210 15:47:43.568729 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:43 crc kubenswrapper[5114]: E1210 15:47:43.568808 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.642619 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.642687 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.642710 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.642739 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.642761 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.744553 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.744621 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.744643 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.744667 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.744686 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.847076 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.847167 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.847186 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.847211 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.847229 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.949774 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.949851 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.949868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.949901 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:43 crc kubenswrapper[5114]: I1210 15:47:43.949925 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:43Z","lastTransitionTime":"2025-12-10T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.052009 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.052080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.052091 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.052108 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.052119 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.154356 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.154418 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.154434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.154450 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.154462 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.257447 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.257515 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.257536 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.257560 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.257577 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.360327 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.360399 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.360422 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.360444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.360462 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.462576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.462627 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.462639 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.462656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.462668 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.565509 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.565587 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.565611 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.565652 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.565676 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.572726 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:44 crc kubenswrapper[5114]: E1210 15:47:44.573141 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.592420 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.621688 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519
052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.644749 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.659685 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.668055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.668115 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.668127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.668142 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.668153 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.677112 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.691822 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.710820 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.727422 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.742088 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.755751 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.768886 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.770371 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.770558 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.770585 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.770606 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.770624 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.777833 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.787475 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.797863 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.810250 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not 
yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.820688 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.831101 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.843572 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.867955 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.872169 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.872268 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: 
I1210 15:47:44.872296 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.872309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.872318 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.974139 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.974202 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.974220 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.974240 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:44 crc kubenswrapper[5114]: I1210 15:47:44.974259 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:44Z","lastTransitionTime":"2025-12-10T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.076123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.076301 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.076317 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.076333 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.076343 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.178502 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.178583 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.178610 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.178656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.178681 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.280394 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.280442 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.280455 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.280471 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.280483 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.383306 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.383360 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.383375 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.383393 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.383404 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.485407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.485466 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.485482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.485506 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.485520 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.568600 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.568677 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.569472 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:45 crc kubenswrapper[5114]: E1210 15:47:45.569508 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:45 crc kubenswrapper[5114]: E1210 15:47:45.569608 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:45 crc kubenswrapper[5114]: E1210 15:47:45.569709 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.587475 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.587754 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.587831 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.587909 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.587976 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.690398 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.690496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.690516 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.690540 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.690557 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.792890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.793258 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.793431 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.793586 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.793714 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.896934 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.897475 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.897576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.897672 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:45 crc kubenswrapper[5114]: I1210 15:47:45.897769 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:45Z","lastTransitionTime":"2025-12-10T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.000670 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.000731 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.000749 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.000776 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.000809 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.102839 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.103181 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.103382 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.103569 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.103703 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.205779 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.206099 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.206235 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.206422 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.206630 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.309450 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.309638 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.309678 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.309719 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.309744 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.412656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.412737 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.412763 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.412786 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.412804 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.514948 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.514995 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.515006 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.515019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.515027 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.568229 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.568404 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.570077 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:46 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:46 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:46 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:46 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:46 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:46 crc kubenswrapper[5114]: fi Dec 10 15:47:46 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 10 15:47:46 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 10 15:47:46 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Dec 10 15:47:46 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 10 15:47:46 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 10 15:47:46 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 10 15:47:46 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:46 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 10 15:47:46 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Dec 10 15:47:46 crc kubenswrapper[5114]: --webhook-port=9743 \ Dec 10 15:47:46 crc kubenswrapper[5114]: ${ho_enable} \ Dec 10 15:47:46 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:46 crc kubenswrapper[5114]: --disable-approver \ Dec 10 15:47:46 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 10 15:47:46 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Dec 10 15:47:46 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 10 15:47:46 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:46 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:46 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.571936 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:46 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:46 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:46 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:46 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:46 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:46 crc kubenswrapper[5114]: fi Dec 10 15:47:46 crc kubenswrapper[5114]: Dec 10 15:47:46 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 10 15:47:46 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 10 15:47:46 crc kubenswrapper[5114]: --disable-webhook \ Dec 10 15:47:46 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 10 15:47:46 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Dec 10 15:47:46 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:46 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.573109 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.617636 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.617685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.617694 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.617709 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.617719 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.719340 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.719395 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.719407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.719425 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.719438 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.822018 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.822254 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.822352 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.822455 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.822554 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.826232 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.826336 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.826404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.826497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.826585 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.835251 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.839657 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.839707 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.839721 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.839738 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.839751 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.848949 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.852984 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.853041 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.853058 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.853080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.853095 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.863294 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.867021 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.867066 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.867079 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.867097 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.867109 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.880558 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.884442 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.884487 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.884500 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.884522 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.884541 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.895065 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:46 crc kubenswrapper[5114]: E1210 15:47:46.895187 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.924782 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.924845 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.924859 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.924883 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:46 crc kubenswrapper[5114]: I1210 15:47:46.924900 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:46Z","lastTransitionTime":"2025-12-10T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.027213 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.027303 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.027327 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.027347 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.027357 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.130095 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.130165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.130178 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.130203 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.130218 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.232497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.232550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.232564 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.232583 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.232597 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.335349 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.335398 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.335410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.335423 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.335432 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.437519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.437572 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.437587 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.437605 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.437618 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.540251 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.540373 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.540399 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.540434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.540459 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.568355 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.568440 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.568355 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.568611 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.568744 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.568923 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.572139 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.576883 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g9ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pvhhc_openshift-machine-config-operator(b38ac556-07b2-4e25-9595-6adae4fcecb7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:47 crc kubenswrapper[5114]: E1210 15:47:47.578232 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.642679 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.642732 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.642745 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.642762 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.642774 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.745127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.745184 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.745205 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.745232 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.745251 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.847795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.847859 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.847880 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.847903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.847921 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.950428 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.950498 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.950517 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.950540 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:47 crc kubenswrapper[5114]: I1210 15:47:47.950557 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:47Z","lastTransitionTime":"2025-12-10T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.053656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.053722 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.053747 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.053779 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.053803 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.156487 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.156560 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.156582 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.156607 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.156625 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.259667 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.259761 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.259807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.259840 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.259865 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.362881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.362973 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.363015 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.363044 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.363067 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.465991 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.466058 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.466069 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.466082 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.466110 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.567902 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:48 crc kubenswrapper[5114]: E1210 15:47:48.568354 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.569694 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.569766 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.569794 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.569825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.569850 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: E1210 15:47:48.570731 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:48 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:48 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:48 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 10 15:47:48 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Dec 10 15:47:48 crc kubenswrapper[5114]: else Dec 10 15:47:48 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 10 15:47:48 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:48 crc kubenswrapper[5114]: fi Dec 10 15:47:48 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 10 15:47:48 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:48 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:48 crc kubenswrapper[5114]: E1210 15:47:48.571980 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 10 15:47:48 crc kubenswrapper[5114]: E1210 15:47:48.574714 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:48 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 10 15:47:48 crc kubenswrapper[5114]: apiVersion: v1 Dec 10 15:47:48 crc kubenswrapper[5114]: clusters: Dec 10 15:47:48 crc kubenswrapper[5114]: - cluster: Dec 10 15:47:48 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 10 15:47:48 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Dec 10 15:47:48 crc kubenswrapper[5114]: name: default-cluster Dec 10 15:47:48 crc kubenswrapper[5114]: contexts: Dec 10 15:47:48 crc kubenswrapper[5114]: - context: Dec 10 15:47:48 crc kubenswrapper[5114]: cluster: default-cluster Dec 10 15:47:48 crc kubenswrapper[5114]: namespace: default Dec 10 15:47:48 crc kubenswrapper[5114]: user: default-auth Dec 10 15:47:48 crc kubenswrapper[5114]: name: default-context Dec 10 15:47:48 crc kubenswrapper[5114]: current-context: default-context Dec 10 15:47:48 crc kubenswrapper[5114]: kind: Config Dec 10 15:47:48 crc kubenswrapper[5114]: 
preferences: {} Dec 10 15:47:48 crc kubenswrapper[5114]: users: Dec 10 15:47:48 crc kubenswrapper[5114]: - name: default-auth Dec 10 15:47:48 crc kubenswrapper[5114]: user: Dec 10 15:47:48 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:48 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 10 15:47:48 crc kubenswrapper[5114]: EOF Dec 10 15:47:48 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgklm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfnl_openshift-ovn-kubernetes(5bef68a8-63de-4992-87b6-3dc6c70f5a1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:48 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:48 crc kubenswrapper[5114]: E1210 15:47:48.575918 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.672706 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.672771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.672789 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.672807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.672819 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.776351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.776425 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.776436 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.776452 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.776463 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.878679 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.878722 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.878731 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.878745 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.878754 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.981967 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.982067 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.982086 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.982109 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:48 crc kubenswrapper[5114]: I1210 15:47:48.982129 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:48Z","lastTransitionTime":"2025-12-10T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.085073 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.085313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.085355 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.085391 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.085416 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.188119 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.188182 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.188204 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.188223 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.188235 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.290790 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.290853 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.290866 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.290885 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.290899 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.393117 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.393171 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.393184 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.393200 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.393212 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.495148 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.495197 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.495209 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.495248 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.495259 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.567868 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.568209 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.568310 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.568433 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.568479 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.568679 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.570242 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:49 crc kubenswrapper[5114]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:49 crc kubenswrapper[5114]: set -euo pipefail Dec 10 15:47:49 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 10 15:47:49 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 10 15:47:49 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Dec 10 15:47:49 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 10 15:47:49 crc kubenswrapper[5114]: TS=$(date +%s) Dec 10 15:47:49 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 10 15:47:49 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: log_missing_certs(){ Dec 10 15:47:49 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 10 15:47:49 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 10 15:47:49 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 10 15:47:49 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: } Dec 10 15:47:49 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 10 15:47:49 crc kubenswrapper[5114]: log_missing_certs Dec 10 15:47:49 crc kubenswrapper[5114]: sleep 5 Dec 10 15:47:49 crc kubenswrapper[5114]: done Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 10 15:47:49 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Dec 10 15:47:49 crc kubenswrapper[5114]: --logtostderr \ Dec 10 15:47:49 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Dec 10 15:47:49 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 10 15:47:49 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Dec 10 15:47:49 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Dec 10 15:47:49 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Dec 10 15:47:49 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:49 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.571654 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.572888 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:49 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: set -o allexport Dec 10 15:47:49 crc kubenswrapper[5114]: source "/env/_master" Dec 10 15:47:49 crc kubenswrapper[5114]: set +o allexport Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 10 
15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Dec 10 15:47:49 crc kubenswrapper[5114]: # will rollout control plane pods as well Dec 10 15:47:49 crc kubenswrapper[5114]: network_segmentation_enabled_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: multi_network_enabled_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: route_advertisements_enable_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Dec 10 15:47:49 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: # Enable admin network policy if configured (control-plane always full mode) Dec 10 15:47:49 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Dec 10 15:47:49 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Dec 10 15:47:49 crc kubenswrapper[5114]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Dec 10 15:47:49 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Dec 10 15:47:49 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Dec 10 15:47:49 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Dec 10 15:47:49 crc kubenswrapper[5114]: else Dec 10 15:47:49 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 10 15:47:49 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:49 crc kubenswrapper[5114]: fi Dec 10 15:47:49 crc kubenswrapper[5114]: Dec 10 15:47:49 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 10 15:47:49 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-interconnect \ Dec 10 15:47:49 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Dec 10 15:47:49 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 10 15:47:49 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 10 15:47:49 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Dec 10 15:47:49 crc kubenswrapper[5114]: --metrics-enable-pprof \ Dec 10 15:47:49 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${multi_network_enabled_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-egress-ip=true \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-egress-qos=true \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-egress-service=true \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-multicast \ Dec 10 15:47:49 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Dec 10 15:47:49 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Dec 10 15:47:49 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-79jfj_openshift-ovn-kubernetes(89d5aad2-7968-4ff9-a9fa-50a133a77df8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:49 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.574011 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" Dec 10 15:47:49 crc kubenswrapper[5114]: E1210 15:47:49.574046 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.598992 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.599062 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.599085 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.599110 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.599126 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.701118 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.701184 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.701194 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.701215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.701229 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.803823 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.803891 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.803909 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.803932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.803952 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.906242 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.906343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.906365 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.906390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:49 crc kubenswrapper[5114]: I1210 15:47:49.906406 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:49Z","lastTransitionTime":"2025-12-10T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.008090 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.008137 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.008148 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.008162 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.008172 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.111050 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.111147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.111167 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.111191 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.111209 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.213668 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.213705 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.213717 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.213731 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.213742 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.315968 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.316012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.316023 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.316038 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.316046 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.418703 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.418742 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.418756 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.418771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.418782 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.521260 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.521330 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.521345 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.521365 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.521379 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.568361 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:50 crc kubenswrapper[5114]: E1210 15:47:50.568527 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:50 crc kubenswrapper[5114]: E1210 15:47:50.570701 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:50 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 10 15:47:50 crc kubenswrapper[5114]: set -uo pipefail Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 10 15:47:50 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Dec 10 15:47:50 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Dec 10 15:47:50 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 10 15:47:50 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Dec 10 15:47:50 crc kubenswrapper[5114]: exit 1 Dec 10 15:47:50 crc kubenswrapper[5114]: fi Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: while true; do Dec 10 15:47:50 crc kubenswrapper[5114]: declare -A svc_ips Dec 10 15:47:50 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Dec 10 15:47:50 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Dec 10 15:47:50 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 10 15:47:50 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 10 15:47:50 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 10 15:47:50 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:50 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:50 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 10 15:47:50 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 10 15:47:50 crc kubenswrapper[5114]: for i in ${!cmds[*]} Dec 10 15:47:50 crc kubenswrapper[5114]: do Dec 10 15:47:50 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Dec 10 15:47:50 crc kubenswrapper[5114]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 10 15:47:50 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Dec 10 15:47:50 crc kubenswrapper[5114]: break Dec 10 15:47:50 crc kubenswrapper[5114]: fi Dec 10 15:47:50 crc kubenswrapper[5114]: done Dec 10 15:47:50 crc kubenswrapper[5114]: done Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Dec 10 15:47:50 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 10 15:47:50 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 10 15:47:50 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 10 15:47:50 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 10 15:47:50 crc kubenswrapper[5114]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 10 15:47:50 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 10 15:47:50 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:50 crc kubenswrapper[5114]: continue Dec 10 15:47:50 crc kubenswrapper[5114]: fi Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: # Append resolver entries for services Dec 10 15:47:50 crc kubenswrapper[5114]: rc=0 Dec 10 15:47:50 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Dec 10 15:47:50 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Dec 10 15:47:50 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 10 15:47:50 crc kubenswrapper[5114]: done Dec 10 15:47:50 crc kubenswrapper[5114]: done Dec 10 15:47:50 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Dec 10 15:47:50 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:50 crc kubenswrapper[5114]: continue Dec 10 15:47:50 crc kubenswrapper[5114]: fi Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: Dec 10 15:47:50 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 10 15:47:50 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Dec 10 15:47:50 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 10 15:47:50 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 10 15:47:50 crc kubenswrapper[5114]: fi Dec 10 15:47:50 crc kubenswrapper[5114]: sleep 60 & wait Dec 10 15:47:50 crc kubenswrapper[5114]: unset svc_ips Dec 10 15:47:50 crc kubenswrapper[5114]: done Dec 10 15:47:50 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2wz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-49rgv_openshift-dns(379e5b28-21b4-4727-a60f-0fad71bf89fa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:50 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:50 crc kubenswrapper[5114]: E1210 15:47:50.571773 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-49rgv" podUID="379e5b28-21b4-4727-a60f-0fad71bf89fa" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.623826 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.623911 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.623958 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.623978 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.623992 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.726538 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.726578 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.726587 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.726606 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.726615 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.828677 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.828710 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.828718 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.828739 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.828748 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.930716 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.930806 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.930830 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.930861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:50 crc kubenswrapper[5114]: I1210 15:47:50.930885 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:50Z","lastTransitionTime":"2025-12-10T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.034234 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.034315 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.034330 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.034349 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.034361 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.136285 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.136358 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.136390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.136408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.136420 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.238814 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.238864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.238877 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.238893 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.238905 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.341758 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.341802 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.341811 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.341825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.341834 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.444464 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.444535 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.444557 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.444581 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.444596 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.546765 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.546834 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.546849 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.546910 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.546923 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.568170 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.568467 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.568549 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.569146 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.569354 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.570314 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:51 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 10 15:47:51 crc kubenswrapper[5114]: while [ true ]; Dec 10 15:47:51 crc kubenswrapper[5114]: do Dec 10 15:47:51 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Dec 10 15:47:51 crc kubenswrapper[5114]: echo $f Dec 10 15:47:51 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Dec 10 15:47:51 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 10 15:47:51 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 10 15:47:51 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Dec 10 15:47:51 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:51 crc kubenswrapper[5114]: else Dec 10 15:47:51 crc kubenswrapper[5114]: mkdir $reg_dir_path Dec 10 15:47:51 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Dec 10 15:47:51 crc kubenswrapper[5114]: fi Dec 10 15:47:51 crc kubenswrapper[5114]: done Dec 10 15:47:51 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Dec 10 15:47:51 crc kubenswrapper[5114]: echo $d Dec 10 15:47:51 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 10 15:47:51 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Dec 10 15:47:51 crc kubenswrapper[5114]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 10 15:47:51 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Dec 10 15:47:51 crc kubenswrapper[5114]: fi Dec 10 15:47:51 crc kubenswrapper[5114]: done Dec 10 15:47:51 crc kubenswrapper[5114]: sleep 60 & wait ${!} Dec 10 15:47:51 crc kubenswrapper[5114]: done Dec 10 15:47:51 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-sg27x_openshift-image-registry(a54715ec-382b-4bb8-bef2-f125ee0bae2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:51 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.570902 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 10 15:47:51 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 10 15:47:51 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 10 15:47:51 crc kubenswrapper[5114]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfxbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lg6m5_openshift-multus(e7c683ba-536f-45e5-89b0-fe14989cad13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 10 15:47:51 crc kubenswrapper[5114]: > logger="UnhandledError" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.571093 5114 
kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9xxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wbl48_openshift-multus(3a3e165c-439d-4282-b1e7-179dca439343): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.571348 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.571446 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-sg27x" podUID="a54715ec-382b-4bb8-bef2-f125ee0bae2b" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.572610 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wbl48" podUID="3a3e165c-439d-4282-b1e7-179dca439343" Dec 10 15:47:51 crc kubenswrapper[5114]: E1210 15:47:51.572652 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lg6m5" podUID="e7c683ba-536f-45e5-89b0-fe14989cad13" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.649410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.649460 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.649472 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.649488 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.649502 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.752193 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.752256 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.752280 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.752299 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.752309 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.855698 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.856001 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.856094 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.856215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.856302 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.958937 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.959038 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.959053 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.959068 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:51 crc kubenswrapper[5114]: I1210 15:47:51.959077 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:51Z","lastTransitionTime":"2025-12-10T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.061207 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.061263 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.061336 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.061363 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.061400 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.163692 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.163753 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.163770 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.163790 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.163803 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.265538 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.265590 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.265601 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.265618 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.265631 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.368003 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.368057 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.368070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.368087 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.368101 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.470368 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.470428 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.470448 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.470468 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.470484 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.568185 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:52 crc kubenswrapper[5114]: E1210 15:47:52.568650 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.573099 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.573147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.573167 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.573189 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.573208 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.676744 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.676822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.676841 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.676862 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.676874 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.778844 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.778894 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.778908 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.778926 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.778939 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.881440 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.881529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.881562 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.881595 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.881661 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.983687 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.983732 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.983741 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.983758 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:52 crc kubenswrapper[5114]: I1210 15:47:52.983770 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:52Z","lastTransitionTime":"2025-12-10T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.085966 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.086010 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.086019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.086033 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.086043 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.188625 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.188677 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.188689 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.188706 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.188720 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.290775 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.290943 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.290966 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.290990 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.291008 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.343951 5114 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.393612 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.393685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.393699 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.393715 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.393730 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.447022 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.447260 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.447228451 +0000 UTC m=+131.168029638 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.447367 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.447590 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.447666 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.447738 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.447721714 +0000 UTC m=+131.168522901 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.447766 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.447890 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.447856777 +0000 UTC m=+131.168657984 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.495554 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.495600 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.495615 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.495669 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.495686 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.548727 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.548817 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.548874 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.548970 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549015 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549040 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549042 5114 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549094 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549108 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs podName:48d8f4a9-0b40-486c-ac70-597d1fab05c1 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.549069322 +0000 UTC m=+131.269870549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs") pod "network-metrics-daemon-gjs2g" (UID: "48d8f4a9-0b40-486c-ac70-597d1fab05c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549122 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549055 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549227 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.549194636 +0000 UTC m=+131.269995873 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.549325 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.549259227 +0000 UTC m=+131.270060434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.567900 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.568128 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.568264 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.568319 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.568468 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:53 crc kubenswrapper[5114]: E1210 15:47:53.568499 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.598899 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.598967 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.598994 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.599055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.599082 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.701739 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.701797 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.701814 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.701834 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.701848 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.804371 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.804441 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.804458 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.804482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.804503 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.907366 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.907410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.907423 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.907438 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:53 crc kubenswrapper[5114]: I1210 15:47:53.907449 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:53Z","lastTransitionTime":"2025-12-10T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.010331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.010407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.010608 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.010637 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.010658 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.114080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.114129 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.114141 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.114162 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.114174 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.216697 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.216772 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.216795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.216821 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.216841 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.319728 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.319786 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.319798 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.319816 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.319831 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.422210 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.422407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.422433 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.422462 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.422484 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.525167 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.525225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.525246 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.525266 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.525314 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.568110 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:54 crc kubenswrapper[5114]: E1210 15:47:54.568380 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.598695 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources
-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.622014 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\
\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cp
u\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.628012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.628054 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.628063 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.628077 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.628086 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.639497 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.649824 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.659626 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.671390 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.680417 
5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491
d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.689847 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.697244 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.704794 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.712236 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.718380 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.725027 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.729774 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.729812 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.729823 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.729837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.729847 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.732222 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.741236 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.752552 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.761907 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.778183 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.788336 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.831812 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.831858 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.831874 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.831890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.831899 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.934541 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.934607 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.934654 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.934735 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:54 crc kubenswrapper[5114]: I1210 15:47:54.934795 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:54Z","lastTransitionTime":"2025-12-10T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.037514 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.037578 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.037598 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.037620 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.037636 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.139664 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.139717 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.139728 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.139746 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.139756 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.242186 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.242233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.242245 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.242262 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.242287 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.344537 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.344593 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.344608 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.344630 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.344645 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.447789 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.447845 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.447863 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.447884 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.447902 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.550851 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.550890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.550900 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.550914 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.550924 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.567868 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.567874 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:55 crc kubenswrapper[5114]: E1210 15:47:55.568237 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.567932 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:55 crc kubenswrapper[5114]: E1210 15:47:55.568357 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:55 crc kubenswrapper[5114]: E1210 15:47:55.568541 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.653375 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.653430 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.653443 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.653460 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.653471 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.756179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.756233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.756246 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.756264 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.756296 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.858212 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.858293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.858313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.858334 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.858348 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.960230 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.960474 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.960514 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.960548 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:55 crc kubenswrapper[5114]: I1210 15:47:55.960573 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:55Z","lastTransitionTime":"2025-12-10T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.062825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.062918 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.062954 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.062988 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.063012 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.165226 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.165353 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.165406 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.165445 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.165464 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.267616 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.267670 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.267685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.267702 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.267714 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.370745 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.370791 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.370803 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.370824 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.370838 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.473121 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.473166 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.473179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.473196 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.473208 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.568885 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:56 crc kubenswrapper[5114]: E1210 15:47:56.569054 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.576244 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.576316 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.576329 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.576342 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.576353 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.678813 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.678892 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.678912 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.678941 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.678965 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.782408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.782458 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.782467 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.782485 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.782497 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.883804 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.883846 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.883856 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.883870 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.883879 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.986295 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.986346 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.986358 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.986375 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:56 crc kubenswrapper[5114]: I1210 15:47:56.986386 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:56Z","lastTransitionTime":"2025-12-10T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.088763 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.088808 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.088818 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.088832 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.088842 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.129620 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.129676 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.129690 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.129706 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.129719 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.139169 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.142497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.142555 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.142571 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.142592 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.142608 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.152899 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.156199 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.156225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.156233 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.156245 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.156254 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.188738 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.192937 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.193000 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.193016 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.193037 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.193052 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.210044 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.216567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.216610 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.216622 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.216637 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.216649 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.225602 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1983090-c631-42b8-889c-661e5120de50\\\",\\\"systemUUID\\\":\\\"ea4de44f-fffe-48de-b641-4c0ea71eb3ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.225766 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.227044 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.227082 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.227093 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.227107 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.227116 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.329519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.329583 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.329601 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.329624 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.329644 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.432395 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.432493 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.432529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.432560 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.432583 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.535742 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.535838 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.535865 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.535897 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.535921 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.568449 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.568449 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.568677 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.568450 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.568796 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:57 crc kubenswrapper[5114]: E1210 15:47:57.568992 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.639076 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.639145 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.639160 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.639180 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.639192 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.741333 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.741378 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.741390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.741405 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.741419 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.844179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.844324 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.844335 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.844351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.844363 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.946541 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.946684 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.946710 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.946747 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:57 crc kubenswrapper[5114]: I1210 15:47:57.946791 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:57Z","lastTransitionTime":"2025-12-10T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.048861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.048914 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.048924 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.048942 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.048952 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.152049 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.152112 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.152131 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.152157 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.152175 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.254062 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.254123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.254135 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.254152 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.254164 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.356841 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.356939 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.356968 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.356998 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.357026 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.459656 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.459717 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.459728 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.459744 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.459755 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.562832 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.562888 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.562909 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.562928 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.562947 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.568130 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:47:58 crc kubenswrapper[5114]: E1210 15:47:58.568231 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.665082 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.665128 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.665151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.665170 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.665184 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.768169 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.768529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.768550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.768574 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.768591 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.870255 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.870342 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.870354 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.870370 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.870381 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.940262 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"65abfd8471c11230c2f2b1508520467040d1ed9b8fb2da56438c8306731f2e27"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.940326 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.955577 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.967610 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.972536 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.972601 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.972618 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.972641 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.972658 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:58Z","lastTransitionTime":"2025-12-10T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.977602 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:58 crc kubenswrapper[5114]: I1210 15:47:58.991101 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.003777 
5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491
d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.017166 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.026696 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://65abfd8471c11230c2f2b1508520467040d1ed9b8fb2da56438c8306731f2e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.038369 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.047238 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.056513 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.065661 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.076245 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.076321 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.076331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.076347 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.076362 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.080842 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.091690 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.104493 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.115925 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.148243 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.164020 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.178618 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.178685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.178703 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.178732 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.178750 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.185446 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"et
cd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"
ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.196778 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mount
Path\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":
\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.282420 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.282528 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.282541 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.282563 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.282577 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.385794 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.385850 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.385861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.385877 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.385888 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.488452 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.488499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.488512 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.488526 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.488536 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.567843 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.567954 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.568007 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:47:59 crc kubenswrapper[5114]: E1210 15:47:59.568118 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:47:59 crc kubenswrapper[5114]: E1210 15:47:59.568345 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:47:59 crc kubenswrapper[5114]: E1210 15:47:59.568401 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.591441 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.591495 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.591509 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.591525 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.591537 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.693825 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.693865 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.693874 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.693888 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.693897 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.796358 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.796424 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.796443 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.796465 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.796482 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.898472 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.898530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.898548 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.898568 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:47:59 crc kubenswrapper[5114]: I1210 15:47:59.898583 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:47:59Z","lastTransitionTime":"2025-12-10T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.001499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.001564 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.001580 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.001604 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.001621 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.104005 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.104061 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.104074 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.104092 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.104104 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.206134 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.206488 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.206500 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.206513 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.206523 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.308259 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.308321 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.308330 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.308344 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.308353 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.410457 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.410526 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.410544 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.410568 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.410586 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.513448 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.513521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.513541 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.513561 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.513574 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.568702 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:00 crc kubenswrapper[5114]: E1210 15:48:00.568950 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.615556 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.615602 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.615615 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.615632 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.615641 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.717307 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.717356 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.717369 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.717385 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.717396 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.819641 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.819734 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.819756 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.819786 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.819820 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.922821 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.922873 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.922890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.922915 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.922938 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:00Z","lastTransitionTime":"2025-12-10T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.948179 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="b38448aaca5bba30a396046b9ada6c007e6433b291ac82aba2d547ae273e0124" exitCode=0 Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.948343 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"b38448aaca5bba30a396046b9ada6c007e6433b291ac82aba2d547ae273e0124"} Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.970348 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d2b4c9-40f0-4dcb-ad8c-0fe4a5304563\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85e77e659fccf9ba6e2cc6e99afbafd6be1703e401429ba871243247e0c20a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://447746eb6e190728d80f154f34d6c4c3cd6a364d95c18a4c109e1a2d00fbcf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\
"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://251a7ed18067c8bcbcbcb38700fe905a2a4ebf34fef9f02a6ffc9f78a334bc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43234809c1296bc87d3909492e145b0720e62cf92728f1f24baeac176f8cfc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4654b1e58183f9508823b58dc37a09482feafd97c887cc56f9d1c793999ee516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:18Z\\\"}},\\\"user\\\"
:{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101e3958feb79a37918d043f01289b15aa43519052915151289b2df11a4c798e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://000c0ac3fe264d2edae20d00ae4b904a9c638f104925be4c2999a32625c2384e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory
\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90da8daaae30e60295160aefe8748f6cf28eda2cd17d933569c0320aebc57f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.982149 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e331166d-a33f-44c1-9a3e-f43cfee598a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca
-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T15:47:00Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1210 15:46:59.465586 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 15:46:59.465755 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1210 15:46:59.466800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3823188907/tls.crt::/tmp/serving-cert-3823188907/tls.key\\\\\\\"\\\\nI1210 15:47:00.080067 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 15:47:00.081594 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 15:47:00.081609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 15:47:00.081631 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 15:47:00.081635 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 15:47:00.084952 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 15:47:00.084970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 15:47:00.084979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 15:47:00.084982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 15:47:00.084984 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 15:47:00.084987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 15:47:00.085095 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 15:47:00.088454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o
://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:00 crc kubenswrapper[5114]: I1210 15:48:00.991647 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.000525 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.010545 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.028253 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3e165c-439d-4282-b1e7-179dca439343\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j9xxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbl48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.029632 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.029669 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.029681 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.029699 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.029710 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.042126 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cddacc92-81b7-4948-93c5-5c47e15a9d41\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cf7cb8d12a0390623c03e2a919f8f30da8ac13d60bbaaca7bd32778e9816e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8822b68284631476f7526c5a6629b3cbe113320b8716837d4be7ed679ea64b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d65e5ca10eda1aed2b331dff87ea726c9ba50cfbb47bf07c74e0ce4d6d5b99b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf99e2dd5c01828fb3db803c3d59c571d32f320bec0325579c1510965bea01ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.051600 5114 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.061841 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b38ac556-07b2-4e25-9595-6adae4fcecb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://65abfd8471c11230c2f2b1508520467040d1ed9b8fb2da56438c8306731f2e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:47:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g9ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pvhhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.073311 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-lg6m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c683ba-536f-45e5-89b0-fe14989cad13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfxbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lg6m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.081522 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d8f4a9-0b40-486c-ac70-597d1fab05c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtlfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjs2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.089044 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-49rgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"379e5b28-21b4-4727-a60f-0fad71bf89fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2wz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-49rgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.096620 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sg27x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54715ec-382b-4bb8-bef2-f125ee0bae2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl62h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sg27x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.109473 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89d5aad2-7968-4ff9-a9fa-50a133a77df8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkm4v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-79jfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.118702 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa5e9e-e71a-458f-88e7-57d296462452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b63509d96fe3793fb1dffe2943da9a38a875dd373fbad85638d39878168af249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://108af1094b4ecac73d954933b32171f5e697d11d78490d831db63f315177de7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.127523 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.131372 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.131421 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.131434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.131453 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.131467 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.137954 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.153418 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b38448aaca5bba30a396046b9ada6c007e6433b291ac82aba2d547ae273e0124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b38448aaca5bba30a396046b9ada6c007e6433b291ac82aba2d547ae273e0124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:48:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgklm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:47:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.163373 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4f07611-baa7-42a7-8607-306ed57fb75c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://800d1520c7107344f8b6d771d0fecfb9ca2644d8efe597cabd69c5de72a571ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allo
catedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c19e0260e8980b12b59f394a8355cee2eee1dc159e14081a0ff23cebdd4e9f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1daca1262ac174a242cff74011ab4da1c00a8caaf4bc44b58af5400ae24d3226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:46:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:46:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.233928 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.234002 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.234015 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.234033 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.234046 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.336064 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.336101 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.336110 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.336125 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.336134 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.437664 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.437721 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.437733 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.437748 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.437759 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.540210 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.540287 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.540305 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.540325 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.540337 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.568698 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:01 crc kubenswrapper[5114]: E1210 15:48:01.568843 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.570064 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:01 crc kubenswrapper[5114]: E1210 15:48:01.570148 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.570328 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:01 crc kubenswrapper[5114]: E1210 15:48:01.570395 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.643470 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.643554 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.643593 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.643622 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.643634 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.745545 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.745586 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.745596 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.745611 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.745623 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.847904 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.847944 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.847956 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.847972 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.847984 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.950025 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.950371 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.950383 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.950398 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.950407 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:01Z","lastTransitionTime":"2025-12-10T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.956984 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"2cfed98aeec135d93b96d6ec6155091f30a0164660db4d06ba4aa1ffee4edf9b"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.957029 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"7453b04cfa74e156eca43d1a9ada6956017f813682a7e71c3aadde2b561e8728"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.957045 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.957061 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.957078 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.957092 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.958842 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c4f63e525477fcce7ec4fa0f057428fe055970830760eaaa1d317104a7920535"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.958872 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"80ebc43d6390713ba2ca707b207a593b716a9e11be4efcc4af881a9d28c2a90f"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.960465 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-49rgv" event={"ID":"379e5b28-21b4-4727-a60f-0fad71bf89fa","Type":"ContainerStarted","Data":"72a45f5b571f53360600da8d0b7536d20f7ceaa2b1a37886269c89386a3cda04"} Dec 10 15:48:01 crc kubenswrapper[5114]: I1210 15:48:01.977951 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.97789116 podStartE2EDuration="40.97789116s" podCreationTimestamp="2025-12-10 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:01.977362547 +0000 UTC m=+107.698163734" watchObservedRunningTime="2025-12-10 15:48:01.97789116 +0000 UTC m=+107.698692377" Dec 10 15:48:02 crc 
kubenswrapper[5114]: I1210 15:48:02.052435 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.052482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.052498 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.052519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.052534 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.056789 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=41.056776032 podStartE2EDuration="41.056776032s" podCreationTimestamp="2025-12-10 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.055998452 +0000 UTC m=+107.776799639" watchObservedRunningTime="2025-12-10 15:48:02.056776032 +0000 UTC m=+107.777577219" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.099543 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=41.099522881 podStartE2EDuration="41.099522881s" podCreationTimestamp="2025-12-10 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.098837904 +0000 UTC m=+107.819639091" watchObservedRunningTime="2025-12-10 15:48:02.099522881 +0000 UTC m=+107.820324078" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.132954 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=41.132881133 podStartE2EDuration="41.132881133s" podCreationTimestamp="2025-12-10 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.115961466 +0000 UTC m=+107.836762673" watchObservedRunningTime="2025-12-10 15:48:02.132881133 +0000 UTC m=+107.853682351" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.154662 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.154706 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.154716 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.154747 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.154758 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.202528 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.202477901 podStartE2EDuration="41.202477901s" podCreationTimestamp="2025-12-10 15:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.201763593 +0000 UTC m=+107.922564790" watchObservedRunningTime="2025-12-10 15:48:02.202477901 +0000 UTC m=+107.923279078" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.232172 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podStartSLOduration=89.23214141 podStartE2EDuration="1m29.23214141s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.231438862 +0000 UTC m=+107.952240059" watchObservedRunningTime="2025-12-10 15:48:02.23214141 +0000 UTC m=+107.952942867" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.256748 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.256818 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.256840 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.256866 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.256885 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.318548 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-49rgv" podStartSLOduration=89.31851935 podStartE2EDuration="1m29.31851935s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.317197637 +0000 UTC m=+108.037998814" watchObservedRunningTime="2025-12-10 15:48:02.31851935 +0000 UTC m=+108.039320567" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.358810 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.358859 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.358872 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.358889 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.358902 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.461410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.461456 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.461467 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.461483 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.461493 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.564056 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.564110 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.564123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.564140 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.564155 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.568926 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:02 crc kubenswrapper[5114]: E1210 15:48:02.569128 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.666010 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.666056 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.666070 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.666092 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.666108 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.767933 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.767978 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.767995 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.768019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.768035 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.870680 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.870737 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.870750 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.870770 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.870783 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.966620 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sg27x" event={"ID":"a54715ec-382b-4bb8-bef2-f125ee0bae2b","Type":"ContainerStarted","Data":"186c5c4ed52a1eec125d91e973e6e0fc817bf39c5479d6f56b0cd551b4f4f726"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.968322 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"24306a24f60dc4cbf353358de9bbab8474b50ca7e21438606127b00b1cdf86ac"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.972462 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.972522 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.972542 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.972564 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.972583 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:02Z","lastTransitionTime":"2025-12-10T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:02 crc kubenswrapper[5114]: I1210 15:48:02.981405 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-sg27x" podStartSLOduration=89.981388536 podStartE2EDuration="1m29.981388536s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:02.980574255 +0000 UTC m=+108.701375442" watchObservedRunningTime="2025-12-10 15:48:02.981388536 +0000 UTC m=+108.702189723" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.075315 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.075372 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.075388 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.075411 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.075431 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.177770 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.177856 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.177880 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.177910 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.177934 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.280154 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.280206 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.280218 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.280233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.280246 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.381868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.381925 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.381943 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.381967 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.381983 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.484393 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.484439 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.484451 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.484467 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.484478 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.568093 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.568117 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.568206 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:03 crc kubenswrapper[5114]: E1210 15:48:03.568370 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:03 crc kubenswrapper[5114]: E1210 15:48:03.568781 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:03 crc kubenswrapper[5114]: E1210 15:48:03.568884 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.590225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.590571 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.590584 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.590602 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.590615 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.696640 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.696693 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.696705 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.696722 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.696737 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.798433 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.798503 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.798522 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.798547 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.798565 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.900640 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.900693 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.900707 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.900725 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.900740 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:03Z","lastTransitionTime":"2025-12-10T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.977904 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"7c5a891aeb984e12705fc11a5c58e8ab9e9a1966be0807109964e42a498c1a48"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.979543 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lg6m5" event={"ID":"e7c683ba-536f-45e5-89b0-fe14989cad13","Type":"ContainerStarted","Data":"9bc56c41fabe5c4fd3e8cb8cc42b49588c7a28d1cb287728e0ecab178f638cec"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.982215 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerStarted","Data":"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.982250 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerStarted","Data":"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e"} Dec 10 15:48:03 crc kubenswrapper[5114]: I1210 15:48:03.999031 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lg6m5" podStartSLOduration=90.999008768 podStartE2EDuration="1m30.999008768s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:03.997835988 +0000 UTC m=+109.718637165" watchObservedRunningTime="2025-12-10 15:48:03.999008768 +0000 UTC m=+109.719809945" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.002659 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.002720 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.002732 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.002753 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.002767 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.104663 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.104699 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.104708 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.104720 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.104729 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.206325 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.206369 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.206379 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.206393 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.206424 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.308408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.308446 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.308457 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.308471 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.308482 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.410355 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.410394 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.410403 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.410416 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.410425 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.512191 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.512241 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.512253 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.512287 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.512300 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.569712 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:04 crc kubenswrapper[5114]: E1210 15:48:04.569851 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.614426 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.614506 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.614525 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.614550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.614567 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.716754 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.716792 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.716806 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.716821 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.716832 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.819415 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.819496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.819511 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.819557 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.819580 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.922227 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.922365 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.922394 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.922452 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.922481 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:04Z","lastTransitionTime":"2025-12-10T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:04 crc kubenswrapper[5114]: I1210 15:48:04.994499 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e317f34f7227b850aaafe5654d58b7439ef7e404d8f2c9eea5477833d7b71435"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.014069 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podStartSLOduration=91.014050683 podStartE2EDuration="1m31.014050683s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:04.021047654 +0000 UTC m=+109.741848841" watchObservedRunningTime="2025-12-10 15:48:05.014050683 +0000 UTC m=+110.734851860" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.025176 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.025266 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.025293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.025309 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.025321 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.128017 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.128125 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.128151 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.128179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.128204 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.230591 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.230652 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.230669 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.230688 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.230703 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.333862 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.333937 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.333964 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.333994 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.334016 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.435822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.435879 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.435895 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.435913 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.435927 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.538625 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.538701 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.538730 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.538756 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.538774 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.568310 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.568387 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:05 crc kubenswrapper[5114]: E1210 15:48:05.568494 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.568388 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:05 crc kubenswrapper[5114]: E1210 15:48:05.568578 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:05 crc kubenswrapper[5114]: E1210 15:48:05.568759 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.640994 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.641058 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.641075 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.641097 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.641113 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.743527 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.743604 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.743630 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.743660 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.743687 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.845750 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.845804 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.845817 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.845833 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.845844 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.960155 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.960511 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.960524 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.960542 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:05 crc kubenswrapper[5114]: I1210 15:48:05.960555 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:05Z","lastTransitionTime":"2025-12-10T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.003057 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerStarted","Data":"9510cc8bb6d372e78924e4b1bf6e37e9a71cfc399a5bf10d29cb1d0573722165"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.003487 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.003518 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.003528 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.031821 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podStartSLOduration=93.031794158 podStartE2EDuration="1m33.031794158s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:06.031315226 +0000 UTC m=+111.752116423" watchObservedRunningTime="2025-12-10 15:48:06.031794158 +0000 UTC m=+111.752595345" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.032208 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.034641 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.065669 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.065723 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.065739 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.065761 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.065776 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.168778 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.168853 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.168863 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.168877 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.168885 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.271138 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.271179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.271190 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.271206 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.271217 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.373444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.373496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.373510 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.373528 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.373540 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.478735 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.478789 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.478804 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.478822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.478836 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.568166 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:06 crc kubenswrapper[5114]: E1210 15:48:06.568335 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.580934 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.580996 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.581016 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.581038 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.581053 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.682533 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.682579 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.682593 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.682608 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.682619 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.784209 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.784252 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.784264 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.784297 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.784309 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.885950 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.885998 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.886007 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.886020 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.886029 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.988633 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.988671 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.988682 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.988699 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:06 crc kubenswrapper[5114]: I1210 15:48:06.988710 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:06Z","lastTransitionTime":"2025-12-10T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.091058 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.091169 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.091181 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.091197 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.091210 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.192992 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.193042 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.193084 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.193124 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.193136 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.296145 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.296213 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.296232 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.296255 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.296303 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.398902 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.398949 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.398962 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.398980 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.398992 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.500985 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.501023 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.501032 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.501047 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.501057 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.521434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.521464 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.521476 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.521490 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.521500 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:48:07Z","lastTransitionTime":"2025-12-10T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.562500 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx"] Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.565214 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567332 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567751 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567803 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567827 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 10 15:48:07 crc kubenswrapper[5114]: E1210 15:48:07.567914 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567838 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.567837 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:07 crc kubenswrapper[5114]: E1210 15:48:07.568116 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:07 crc kubenswrapper[5114]: E1210 15:48:07.568222 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.569029 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.572772 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.582065 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.618460 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.618508 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.618526 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.618559 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.618679 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719404 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719441 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719499 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719521 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719582 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719680 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.719704 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.721300 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.727107 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.745888 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-s6rdx\" (UID: \"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.849192 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gjs2g"] Dec 10 15:48:07 crc kubenswrapper[5114]: I1210 15:48:07.878259 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" Dec 10 15:48:08 crc kubenswrapper[5114]: I1210 15:48:08.010236 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" event={"ID":"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce","Type":"ContainerStarted","Data":"00a805049575a7b7dd1325152b1f6d35c69308b5cede93b7f5b6a53077cee8ae"} Dec 10 15:48:08 crc kubenswrapper[5114]: I1210 15:48:08.012227 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="4c98c7255247a4a1c5a65e73c01e6fac0765089b26b2fd6b46474836f0455666" exitCode=0 Dec 10 15:48:08 crc kubenswrapper[5114]: I1210 15:48:08.012340 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:08 crc kubenswrapper[5114]: E1210 15:48:08.012446 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:08 crc kubenswrapper[5114]: I1210 15:48:08.012571 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"4c98c7255247a4a1c5a65e73c01e6fac0765089b26b2fd6b46474836f0455666"} Dec 10 15:48:08 crc kubenswrapper[5114]: I1210 15:48:08.568732 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:08 crc kubenswrapper[5114]: E1210 15:48:08.569473 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.016803 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" event={"ID":"c9f80fd2-ac48-4bc5-bcc7-1869e78ed4ce","Type":"ContainerStarted","Data":"d52af372faaf5a2c05a6f0684772824bb128d36bae9d193d5d615c7a109650d8"} Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.019075 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="fa551831c385f165e98b20a50275cfc4f1f0ed39a270da758e54465cdc56da8a" exitCode=0 Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.019155 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"fa551831c385f165e98b20a50275cfc4f1f0ed39a270da758e54465cdc56da8a"} Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.058375 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-s6rdx" podStartSLOduration=96.058360709 podStartE2EDuration="1m36.058360709s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:09.03462708 +0000 UTC m=+114.755428257" watchObservedRunningTime="2025-12-10 15:48:09.058360709 +0000 UTC m=+114.779161886" Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.567812 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.567877 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:09 crc kubenswrapper[5114]: E1210 15:48:09.567974 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:09 crc kubenswrapper[5114]: E1210 15:48:09.568040 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:09 crc kubenswrapper[5114]: I1210 15:48:09.568118 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:09 crc kubenswrapper[5114]: E1210 15:48:09.568288 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:10 crc kubenswrapper[5114]: I1210 15:48:10.568497 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:10 crc kubenswrapper[5114]: E1210 15:48:10.568657 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:11 crc kubenswrapper[5114]: I1210 15:48:11.028802 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="de9be45dfba409d2ac46e5969f7c1549abd7199ebaac04752504f365946f3523" exitCode=0 Dec 10 15:48:11 crc kubenswrapper[5114]: I1210 15:48:11.028853 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"de9be45dfba409d2ac46e5969f7c1549abd7199ebaac04752504f365946f3523"} Dec 10 15:48:11 crc kubenswrapper[5114]: I1210 15:48:11.568035 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:11 crc kubenswrapper[5114]: E1210 15:48:11.568381 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 10 15:48:11 crc kubenswrapper[5114]: I1210 15:48:11.568142 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:11 crc kubenswrapper[5114]: I1210 15:48:11.568178 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:11 crc kubenswrapper[5114]: E1210 15:48:11.568681 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjs2g" podUID="48d8f4a9-0b40-486c-ac70-597d1fab05c1" Dec 10 15:48:11 crc kubenswrapper[5114]: E1210 15:48:11.568790 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.033323 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="4bb42ea8aef80bf6ff9c21824f452ba3a6bd4911f6f2bc8bf95ef0ced8d3eff7" exitCode=0 Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.033380 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"4bb42ea8aef80bf6ff9c21824f452ba3a6bd4911f6f2bc8bf95ef0ced8d3eff7"} Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.568319 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:12 crc kubenswrapper[5114]: E1210 15:48:12.568514 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.928117 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.928641 5114 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.955954 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-llrrx"] Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.958889 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.975301 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt"] Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.976830 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.977927 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.978441 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.978645 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.979021 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.979892 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.979944 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.980103 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8"] Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.980180 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.985861 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.986155 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:48:12 crc kubenswrapper[5114]: E1210 15:48:12.986458 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-controller-manager-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" type="*v1.ConfigMap" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.986554 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 10 15:48:12 crc kubenswrapper[5114]: E1210 15:48:12.986322 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"kube-controller-manager-operator-dockercfg-tnfx9\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" type="*v1.Secret" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.986838 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.988087 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4"] Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.989143 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.993544 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp"] Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.994371 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:12 crc kubenswrapper[5114]: I1210 15:48:12.996811 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.016836 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.017286 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.017399 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.017301 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.017850 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.018169 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cfjf4"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.018344 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.023345 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.024752 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.024860 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.025069 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.025247 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.025451 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.026947 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-7nbcs"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.027534 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.027578 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.031875 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.032072 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.032198 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.032348 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.032527 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.032794 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.036368 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.051860 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.052047 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.052147 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.053315 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.056431 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="5c396fdc8fff0d24f6ec20a08c7800fb7420135ebede9cf61280daaef8cabeb4" exitCode=0 Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.056638 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"5c396fdc8fff0d24f6ec20a08c7800fb7420135ebede9cf61280daaef8cabeb4"} Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.057061 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.064631 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.064789 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.065021 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.065047 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.065694 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.065927 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.069148 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.069373 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.092632 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-57xp7"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.096056 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.099327 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.099796 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.100043 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.102559 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.102706 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.102803 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.103988 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.104022 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.117876 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.117974 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.117998 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.119523 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.119619 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.119527 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122498 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-encryption-config\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122530 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122550 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-policies\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122662 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122689 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122709 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1703270b-65b8-4361-a26e-f6b5475b01d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122727 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122748 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122769 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122818 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-client\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122843 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-audit\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122885 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122908 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfvl\" (UniqueName: \"kubernetes.io/projected/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-kube-api-access-qdfvl\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.122942 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kd9\" (UniqueName: \"kubernetes.io/projected/8b6e28a6-b1a9-4942-8457-e54258393016-kube-api-access-k8kd9\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123011 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1703270b-65b8-4361-a26e-f6b5475b01d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123060 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-serving-cert\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123099 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-encryption-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123127 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-dir\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123152 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-client\") pod 
\"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123171 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123197 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vlx\" (UniqueName: \"kubernetes.io/projected/1703270b-65b8-4361-a26e-f6b5475b01d0-kube-api-access-m2vlx\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123220 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-serving-cert\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123247 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-audit-dir\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123267 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123314 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-node-pullsecrets\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123376 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-image-import-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123407 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1703270b-65b8-4361-a26e-f6b5475b01d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" 
(UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123812 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zqx8l"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.124049 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.124063 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.124224 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.124743 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.125224 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.125533 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.125656 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.123430 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.129407 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-serving-ca\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.129448 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.129474 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcgm\" (UniqueName: \"kubernetes.io/projected/4064ac8e-a335-40db-a1d6-38f9e8838fbf-kube-api-access-9tcgm\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.133018 5114 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.133045 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.133298 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.133366 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.133663 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.134757 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.136466 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-tsm29"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.140429 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.140830 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.140880 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.140993 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.141165 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.143156 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-59hqn"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.143335 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.147683 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.148311 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.148580 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.148811 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149011 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149295 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149520 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149724 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149905 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.149954 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.150139 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.150330 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.151188 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.151219 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.156040 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-db5ff"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.157152 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.159777 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.160463 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.165019 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.165801 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.166142 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.166354 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.166644 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.166883 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167126 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167235 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167383 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167460 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167489 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167584 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167752 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167987 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167616 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc 
kubenswrapper[5114]: I1210 15:48:13.166146 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.167596 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.174939 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.176010 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.190131 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.192386 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.194597 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.196802 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v9phm"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.198808 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.200069 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.200764 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.204255 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.204335 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.204265 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.205417 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.207307 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j6t46"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.207822 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.226858 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.227391 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.228565 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.228767 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.229548 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230492 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-serving-cert\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230520 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77jx\" (UniqueName: \"kubernetes.io/projected/9d8735f9-6304-4571-a4ef-490336afe153-kube-api-access-n77jx\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230545 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-encryption-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230561 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-dir\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230576 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-images\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230591 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-client\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230608 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230625 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1fe217-9de4-455e-80e1-dd01805e7935-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230641 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vlx\" (UniqueName: \"kubernetes.io/projected/1703270b-65b8-4361-a26e-f6b5475b01d0-kube-api-access-m2vlx\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230657 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-serving-cert\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230681 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbw2\" (UniqueName: \"kubernetes.io/projected/6f642643-9482-4e17-b0f7-bd7bf530f5a1-kube-api-access-rvbw2\") pod \"migrator-866fcbc849-288ln\" (UID: \"6f642643-9482-4e17-b0f7-bd7bf530f5a1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230696 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb1fe217-9de4-455e-80e1-dd01805e7935-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230722 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-audit-dir\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230737 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb1fe217-9de4-455e-80e1-dd01805e7935-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230753 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230771 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-node-pullsecrets\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230794 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230809 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-stats-auth\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230824 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pk7t\" (UniqueName: \"kubernetes.io/projected/b32a5174-fc1f-4e6e-8173-414921f6d86f-kube-api-access-7pk7t\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230841 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-image-import-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230859 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/826bf927-48e7-4696-92d0-748f01cdc1a8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230881 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1703270b-65b8-4361-a26e-f6b5475b01d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230909 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d695\" (UniqueName: \"kubernetes.io/projected/d68dcc8d-b977-44e9-a63c-1cee775b50f2-kube-api-access-8d695\") pod \"downloads-747b44746d-7nbcs\" (UID: \"d68dcc8d-b977-44e9-a63c-1cee775b50f2\") " pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230924 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-metrics-certs\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230941 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230956 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-serving-ca\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230974 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230988 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8735f9-6304-4571-a4ef-490336afe153-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231004 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32a5174-fc1f-4e6e-8173-414921f6d86f-service-ca-bundle\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231049 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcgm\" (UniqueName: \"kubernetes.io/projected/4064ac8e-a335-40db-a1d6-38f9e8838fbf-kube-api-access-9tcgm\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231066 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-encryption-config\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231088 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-default-certificate\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231109 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231123 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-policies\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231138 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-trusted-ca\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231163 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231180 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1703270b-65b8-4361-a26e-f6b5475b01d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231195 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826bf927-48e7-4696-92d0-748f01cdc1a8-config\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231209 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-serving-cert\") 
pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231228 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231243 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231269 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-config\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231309 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbqb\" (UniqueName: \"kubernetes.io/projected/826bf927-48e7-4696-92d0-748f01cdc1a8-kube-api-access-lwbqb\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231339 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-client\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231355 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-audit\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231378 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231392 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfvl\" (UniqueName: \"kubernetes.io/projected/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-kube-api-access-qdfvl\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231410 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kd9\" (UniqueName: \"kubernetes.io/projected/8b6e28a6-b1a9-4942-8457-e54258393016-kube-api-access-k8kd9\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231425 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb1fe217-9de4-455e-80e1-dd01805e7935-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231455 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1703270b-65b8-4361-a26e-f6b5475b01d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231476 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rpp\" (UniqueName: \"kubernetes.io/projected/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-kube-api-access-l9rpp\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231545 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-dir\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.231725 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.232122 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.230607 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.234732 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.235260 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.236367 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-node-pullsecrets\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.236638 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b6e28a6-b1a9-4942-8457-e54258393016-audit-dir\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.237562 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-audit\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.237866 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.238016 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-etcd-client\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.238679 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-image-import-ca\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.238830 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-audit-policies\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.240214 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.240439 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.241332 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.243445 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-serving-ca\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.243590 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1703270b-65b8-4361-a26e-f6b5475b01d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.243600 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.244164 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4064ac8e-a335-40db-a1d6-38f9e8838fbf-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.244397 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.244526 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1703270b-65b8-4361-a26e-f6b5475b01d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.246847 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.247888 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-encryption-config\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.247916 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-serving-cert\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.247994 5114 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.248158 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.249765 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-serving-cert\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.252113 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b6e28a6-b1a9-4942-8457-e54258393016-encryption-config\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.253535 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.253612 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.253640 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.253662 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1703270b-65b8-4361-a26e-f6b5475b01d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.253826 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4064ac8e-a335-40db-a1d6-38f9e8838fbf-etcd-client\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.259056 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.259217 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.261901 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.262243 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.261940 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.264973 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.265133 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.268485 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.269210 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6e28a6-b1a9-4942-8457-e54258393016-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.268678 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.274346 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.275909 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.276000 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.281194 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.281228 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.281370 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.282554 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.283881 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.288482 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.288713 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.288774 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.291022 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-wp2cx"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.291573 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293334 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-7nbcs"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293356 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293366 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cfjf4"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293376 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293385 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293394 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zqx8l"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.293403 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gg274"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.295996 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dnk6l"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.296069 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.296123 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.300482 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.300502 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.300534 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hxjhm"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.300652 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.305220 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.305450 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.305471 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-55xzh"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.305759 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308149 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308169 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-tsm29"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308180 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308194 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308202 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308211 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308218 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-59hqn"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308229 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308237 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-llrrx"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308244 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308252 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308261 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308280 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308303 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.308378 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.309605 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.310718 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j6t46"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.312035 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.313051 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.314052 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-db5ff"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.320632 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v9phm"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.320817 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.321881 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j45nf"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.325407 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-wp2cx"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.325428 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.325505 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.325508 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.326406 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.327358 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gg274"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.328242 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j45nf"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.328299 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.329343 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dnk6l"] Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.331939 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a2a5e5-1a13-4e0d-81a7-868716149070-serving-cert\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.331967 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjj9\" (UniqueName: \"kubernetes.io/projected/0342172d-59ba-477b-8044-ed71dabb4eed-kube-api-access-7qjj9\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332010 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb1fe217-9de4-455e-80e1-dd01805e7935-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332048 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd8522-fc45-4417-8a06-59b34f001433-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332074 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rpp\" (UniqueName: \"kubernetes.io/projected/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-kube-api-access-l9rpp\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332185 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332227 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-config\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332299 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n77jx\" (UniqueName: \"kubernetes.io/projected/9d8735f9-6304-4571-a4ef-490336afe153-kube-api-access-n77jx\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332333 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-etcd-client\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332381 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-images\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332384 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb1fe217-9de4-455e-80e1-dd01805e7935-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332406 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1fe217-9de4-455e-80e1-dd01805e7935-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332425 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvbw2\" (UniqueName: \"kubernetes.io/projected/6f642643-9482-4e17-b0f7-bd7bf530f5a1-kube-api-access-rvbw2\") pod \"migrator-866fcbc849-288ln\" (UID: \"6f642643-9482-4e17-b0f7-bd7bf530f5a1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332444 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb1fe217-9de4-455e-80e1-dd01805e7935-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332466 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb1fe217-9de4-455e-80e1-dd01805e7935-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332486 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332503 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332542 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332560 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-stats-auth\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332576 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pk7t\" (UniqueName: \"kubernetes.io/projected/b32a5174-fc1f-4e6e-8173-414921f6d86f-kube-api-access-7pk7t\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332593 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332622 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/826bf927-48e7-4696-92d0-748f01cdc1a8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332640 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332665 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8d695\" (UniqueName: \"kubernetes.io/projected/d68dcc8d-b977-44e9-a63c-1cee775b50f2-kube-api-access-8d695\") pod \"downloads-747b44746d-7nbcs\" (UID: \"d68dcc8d-b977-44e9-a63c-1cee775b50f2\") " pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332681 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-metrics-certs\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332700 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332726 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c50168-1c40-4c3d-9a03-c99c13223df8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332742 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fd8522-fc45-4417-8a06-59b34f001433-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332758 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-serving-cert\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332781 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7wd\" (UniqueName: 
\"kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332823 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332862 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c910b353-a094-46d1-9980-657a309d9050-tmp-dir\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332886 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd4n\" (UniqueName: \"kubernetes.io/projected/c910b353-a094-46d1-9980-657a309d9050-kube-api-access-mqd4n\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332914 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8735f9-6304-4571-a4ef-490336afe153-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332928 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-images\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332930 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00c50168-1c40-4c3d-9a03-c99c13223df8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332971 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5lc\" (UniqueName: \"kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.332990 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32a5174-fc1f-4e6e-8173-414921f6d86f-service-ca-bundle\") pod 
\"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333006 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333034 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333164 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-default-certificate\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333182 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333200 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333221 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0342172d-59ba-477b-8044-ed71dabb4eed-webhook-certs\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333249 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-trusted-ca\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333265 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-config\") pod \"etcd-operator-69b85846b6-tsm29\" 
(UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333306 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826bf927-48e7-4696-92d0-748f01cdc1a8-config\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333322 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-serving-cert\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333341 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333361 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45k7m\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333381 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-config\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333398 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vsv\" (UniqueName: \"kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333416 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36137111-458a-4f99-bcbf-6606f80d8ee0-metrics-tls\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333440 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbqb\" (UniqueName: \"kubernetes.io/projected/826bf927-48e7-4696-92d0-748f01cdc1a8-kube-api-access-lwbqb\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333453 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8735f9-6304-4571-a4ef-490336afe153-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333481 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmtsb\" (UniqueName: \"kubernetes.io/projected/64fd8522-fc45-4417-8a06-59b34f001433-kube-api-access-gmtsb\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333503 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333529 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333565 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333581 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-service-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333598 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333614 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.333628 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/36137111-458a-4f99-bcbf-6606f80d8ee0-tmp-dir\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.334000 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32a5174-fc1f-4e6e-8173-414921f6d86f-service-ca-bundle\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.334114 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826bf927-48e7-4696-92d0-748f01cdc1a8-config\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.335605 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/826bf927-48e7-4696-92d0-748f01cdc1a8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.335995 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-metrics-certs\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.336386 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-config\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.336758 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1fe217-9de4-455e-80e1-dd01805e7935-config\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.336986 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb1fe217-9de4-455e-80e1-dd01805e7935-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.337143 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-serving-cert\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.337179 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-stats-auth\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.337218 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8735f9-6304-4571-a4ef-490336afe153-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.338770 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b32a5174-fc1f-4e6e-8173-414921f6d86f-default-certificate\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.341758 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.337955 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-trusted-ca\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.361656 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.383558 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.401962 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.421177 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434305 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a2a5e5-1a13-4e0d-81a7-868716149070-serving-cert\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434335 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7qjj9\" (UniqueName: 
\"kubernetes.io/projected/0342172d-59ba-477b-8044-ed71dabb4eed-kube-api-access-7qjj9\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434359 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd8522-fc45-4417-8a06-59b34f001433-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434380 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434395 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-config\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434414 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-etcd-client\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434447 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434475 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434505 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434526 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434544 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434560 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c50168-1c40-4c3d-9a03-c99c13223df8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434575 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fd8522-fc45-4417-8a06-59b34f001433-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434590 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-serving-cert\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434675 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434768 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7wd\" (UniqueName: \"kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434804 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434827 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c910b353-a094-46d1-9980-657a309d9050-tmp-dir\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: 
\"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434875 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd4n\" (UniqueName: \"kubernetes.io/projected/c910b353-a094-46d1-9980-657a309d9050-kube-api-access-mqd4n\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.434983 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00c50168-1c40-4c3d-9a03-c99c13223df8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435046 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5lc\" (UniqueName: \"kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435093 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435134 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435195 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435229 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435264 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0342172d-59ba-477b-8044-ed71dabb4eed-webhook-certs\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 
15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435384 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-config\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435457 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435518 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45k7m\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435564 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vsv\" (UniqueName: \"kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435601 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36137111-458a-4f99-bcbf-6606f80d8ee0-metrics-tls\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435613 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c910b353-a094-46d1-9980-657a309d9050-tmp-dir\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435787 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmtsb\" (UniqueName: \"kubernetes.io/projected/64fd8522-fc45-4417-8a06-59b34f001433-kube-api-access-gmtsb\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.435970 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436550 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436591 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436626 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-service-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436710 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436748 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/36137111-458a-4f99-bcbf-6606f80d8ee0-tmp-dir\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.437151 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.437582 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-etcd-service-ca\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.436706 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.437679 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.438056 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c50168-1c40-4c3d-9a03-c99c13223df8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.438170 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/36137111-458a-4f99-bcbf-6606f80d8ee0-tmp-dir\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.438336 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.438512 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.438750 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c910b353-a094-46d1-9980-657a309d9050-config\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.439666 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.439738 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.440532 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.441878 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.442698 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.444485 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.444958 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00c50168-1c40-4c3d-9a03-c99c13223df8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.445630 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-serving-cert\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.446783 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.447335 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.448772 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.449033 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b353-a094-46d1-9980-657a309d9050-etcd-client\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.449386 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0342172d-59ba-477b-8044-ed71dabb4eed-webhook-certs\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.462043 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.481955 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.501144 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.551240 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.555878 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-config\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.561630 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.567935 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.567962 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.568110 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.582063 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.601531 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.607631 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a2a5e5-1a13-4e0d-81a7-868716149070-serving-cert\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.627670 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.637166 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.641231 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.645758 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a2a5e5-1a13-4e0d-81a7-868716149070-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.661853 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.681231 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.702554 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.721564 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.740931 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.761659 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.797952 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vlx\" (UniqueName: \"kubernetes.io/projected/1703270b-65b8-4361-a26e-f6b5475b01d0-kube-api-access-m2vlx\") pod \"openshift-controller-manager-operator-686468bdd5-m26t8\" (UID: \"1703270b-65b8-4361-a26e-f6b5475b01d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.820708 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcgm\" (UniqueName: \"kubernetes.io/projected/4064ac8e-a335-40db-a1d6-38f9e8838fbf-kube-api-access-9tcgm\") pod \"apiserver-8596bd845d-bdmmp\" (UID: \"4064ac8e-a335-40db-a1d6-38f9e8838fbf\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.835412 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfvl\" (UniqueName: \"kubernetes.io/projected/44f3c5cd-6bfe-4b81-b822-bfb31ef6e223-kube-api-access-qdfvl\") pod \"cluster-samples-operator-6b564684c8-b4cz4\" (UID: \"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.855487 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.876552 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kd9\" (UniqueName: \"kubernetes.io/projected/8b6e28a6-b1a9-4942-8457-e54258393016-kube-api-access-k8kd9\") pod \"apiserver-9ddfb9f55-llrrx\" (UID: \"8b6e28a6-b1a9-4942-8457-e54258393016\") " pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.882255 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.896201 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.902122 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.906097 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fd8522-fc45-4417-8a06-59b34f001433-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.922588 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.925823 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.929753 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fd8522-fc45-4417-8a06-59b34f001433-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.938834 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.941681 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.961257 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 10 15:48:13 crc kubenswrapper[5114]: I1210 15:48:13.966209 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.001360 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.021417 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.029396 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36137111-458a-4f99-bcbf-6606f80d8ee0-metrics-tls\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.041246 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.063689 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.065616 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a3e165c-439d-4282-b1e7-179dca439343" containerID="15705bf8381a2874fab50db0062a3b8f12113886515990d587cf99cee925debb" exitCode=0 Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.065685 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerDied","Data":"15705bf8381a2874fab50db0062a3b8f12113886515990d587cf99cee925debb"} Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.082214 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.103135 5114 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.121708 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.142019 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.167655 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.189753 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.201388 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.222697 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: E1210 15:48:14.234502 5114 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:14 crc kubenswrapper[5114]: E1210 15:48:14.234592 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config podName:ffd4ccf2-5090-485f-8b42-ca4c2c6f293d nodeName:}" failed. No retries permitted until 2025-12-10 15:48:14.734567541 +0000 UTC m=+120.455368718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config") pod "kube-controller-manager-operator-69d5f845f8-kb5vt" (UID: "ffd4ccf2-5090-485f-8b42-ca4c2c6f293d") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.241706 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.263786 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.278143 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-llrrx"] Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.280487 5114 request.go:752] "Waited before sending request" delay="1.02034141s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.282387 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.292157 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp"] Dec 10 15:48:14 crc kubenswrapper[5114]: W1210 15:48:14.293392 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-38b1d1a514210c3e81c6b0a98ee6f1b8674ac0c89641c64bda7deb49aa55bb75 WatchSource:0}: Error finding container 38b1d1a514210c3e81c6b0a98ee6f1b8674ac0c89641c64bda7deb49aa55bb75: Status 404 returned error can't find the container with id 38b1d1a514210c3e81c6b0a98ee6f1b8674ac0c89641c64bda7deb49aa55bb75 Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.304698 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8"] Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.305997 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.309866 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4"] Dec 10 15:48:14 crc kubenswrapper[5114]: W1210 15:48:14.311118 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4064ac8e_a335_40db_a1d6_38f9e8838fbf.slice/crio-e7bc31ed66a3d06448424ec55c7e1138739d232d60d0eaabbc0c8acd184f9e06 WatchSource:0}: Error finding container e7bc31ed66a3d06448424ec55c7e1138739d232d60d0eaabbc0c8acd184f9e06: Status 404 returned error can't find the container with id e7bc31ed66a3d06448424ec55c7e1138739d232d60d0eaabbc0c8acd184f9e06 Dec 10 15:48:14 crc kubenswrapper[5114]: W1210 15:48:14.317695 5114 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1703270b_65b8_4361_a26e_f6b5475b01d0.slice/crio-c36a7eb58bc7ece610722562a3f104aea6acfa25a52b291aba7700c5c20237a8 WatchSource:0}: Error finding container c36a7eb58bc7ece610722562a3f104aea6acfa25a52b291aba7700c5c20237a8: Status 404 returned error can't find the container with id c36a7eb58bc7ece610722562a3f104aea6acfa25a52b291aba7700c5c20237a8 Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.322516 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.341692 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.362178 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.381892 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.401990 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.429866 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.442427 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.464994 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.483478 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.501680 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.521724 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.542089 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.561259 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.574155 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.582771 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.600822 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.622428 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.642093 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.661646 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.682178 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.702242 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.722509 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.741126 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.761569 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.773994 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.781130 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.802892 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.820911 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.841786 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: E1210 15:48:14.860159 5114 cadvisor_stats_provider.go:525] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.861437 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.881580 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.901687 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.920734 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.941592 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.961124 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 10 15:48:14 crc kubenswrapper[5114]: I1210 15:48:14.981680 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.001658 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.021570 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.041159 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.061964 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.073362 5114 generic.go:358] "Generic (PLEG): container finished" podID="8b6e28a6-b1a9-4942-8457-e54258393016" containerID="1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c" exitCode=0 Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.073471 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" event={"ID":"8b6e28a6-b1a9-4942-8457-e54258393016","Type":"ContainerDied","Data":"1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.073757 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" event={"ID":"8b6e28a6-b1a9-4942-8457-e54258393016","Type":"ContainerStarted","Data":"38b1d1a514210c3e81c6b0a98ee6f1b8674ac0c89641c64bda7deb49aa55bb75"} 
Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.076107 5114 generic.go:358] "Generic (PLEG): container finished" podID="4064ac8e-a335-40db-a1d6-38f9e8838fbf" containerID="b01fd5eeaa43a8bf34bcfc9c6202a28c5e8499e4ad49561819bc96bdbc2e5b7e" exitCode=0 Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.076449 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" event={"ID":"4064ac8e-a335-40db-a1d6-38f9e8838fbf","Type":"ContainerDied","Data":"b01fd5eeaa43a8bf34bcfc9c6202a28c5e8499e4ad49561819bc96bdbc2e5b7e"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.076547 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" event={"ID":"4064ac8e-a335-40db-a1d6-38f9e8838fbf","Type":"ContainerStarted","Data":"e7bc31ed66a3d06448424ec55c7e1138739d232d60d0eaabbc0c8acd184f9e06"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.078627 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" event={"ID":"1703270b-65b8-4361-a26e-f6b5475b01d0","Type":"ContainerStarted","Data":"a73f8a4de659970dfcaaf294b4ddca314c5d7942578fa736a31ba15c04a822de"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.078681 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" event={"ID":"1703270b-65b8-4361-a26e-f6b5475b01d0","Type":"ContainerStarted","Data":"c36a7eb58bc7ece610722562a3f104aea6acfa25a52b291aba7700c5c20237a8"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.082342 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.084401 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbl48" event={"ID":"3a3e165c-439d-4282-b1e7-179dca439343","Type":"ContainerStarted","Data":"6b40129f5926d2cebbaaddb1972e9b85051b2e498c42b26f8bcace308b4b691e"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.086231 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" event={"ID":"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223","Type":"ContainerStarted","Data":"736b9355d566c55770ee0d13b3894d28618807f7b8da27301b2a120bf2f471ea"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.086305 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" event={"ID":"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223","Type":"ContainerStarted","Data":"baba74cda328cd55aaa3298275db428003c430c805549b9ed8c5696fdc37ffce"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.086320 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" event={"ID":"44f3c5cd-6bfe-4b81-b822-bfb31ef6e223","Type":"ContainerStarted","Data":"3e373dd05e9440973799bf7b703cb8f82818cb654fa6ee5be95d3377ee2f5f40"} Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.102520 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.121266 5114 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.141482 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.162804 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.182029 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.261825 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77jx\" (UniqueName: \"kubernetes.io/projected/9d8735f9-6304-4571-a4ef-490336afe153-kube-api-access-n77jx\") pod \"machine-config-operator-67c9d58cbb-mr6mk\" (UID: \"9d8735f9-6304-4571-a4ef-490336afe153\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.299931 5114 request.go:752] "Waited before sending request" delay="1.966986561s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.366466 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qjj9\" (UniqueName: \"kubernetes.io/projected/0342172d-59ba-477b-8044-ed71dabb4eed-kube-api-access-7qjj9\") pod \"multus-admission-controller-69db94689b-zqx8l\" (UID: \"0342172d-59ba-477b-8044-ed71dabb4eed\") " pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.448753 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.521844 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.541073 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.561656 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.582233 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.601876 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.621370 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc 
kubenswrapper[5114]: I1210 15:48:15.641352 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.681892 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.685012 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd4ccf2-5090-485f-8b42-ca4c2c6f293d-config\") pod \"kube-controller-manager-operator-69d5f845f8-kb5vt\" (UID: \"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694151 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0336e7c6-4749-46b7-8709-0b03b511147d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694201 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-trusted-ca-bundle\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694233 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694298 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9z9h\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694358 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694389 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7n5\" (UniqueName: \"kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 
15:48:15.694430 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694467 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b51af6b1-547c-4709-b115-93e1173bca33-machine-approver-tls\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694497 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-auth-proxy-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694532 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0336e7c6-4749-46b7-8709-0b03b511147d-serving-cert\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694559 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694638 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-console-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694683 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694759 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694792 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltlkb\" (UniqueName: \"kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694838 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-config\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694867 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-serving-cert\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.694986 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695032 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695054 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-available-featuregates\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695073 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-oauth-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695103 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn89q\" (UniqueName: \"kubernetes.io/projected/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-kube-api-access-fn89q\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc 
kubenswrapper[5114]: I1210 15:48:15.695132 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-images\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695163 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336e7c6-4749-46b7-8709-0b03b511147d-config\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695190 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695219 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695234 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zn89\" (UniqueName: \"kubernetes.io/projected/b2b61e86-45b8-4491-8236-f056a381a5ab-kube-api-access-9zn89\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695251 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: E1210 15:48:15.695401 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.195384481 +0000 UTC m=+121.916185658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695470 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-service-ca\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695488 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-oauth-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695506 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695524 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95096727-f31b-4fd3-914a-152df463c991-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.695582 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47kc\" (UniqueName: \"kubernetes.io/projected/95096727-f31b-4fd3-914a-152df463c991-kube-api-access-n47kc\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.701430 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.713325 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.721993 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.731513 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb1fe217-9de4-455e-80e1-dd01805e7935-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-wbclb\" (UID: \"cb1fe217-9de4-455e-80e1-dd01805e7935\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.741743 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.763681 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.782168 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.796581 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797029 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-webhook-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797210 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0336e7c6-4749-46b7-8709-0b03b511147d-serving-cert\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797304 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797358 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-tmpfs\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797434 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797488 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-config-volume\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797604 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-console-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797667 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-cabundle\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797719 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd597a8b-7d1b-45bb-9360-cf20173b91c9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797833 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797897 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-plugins-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.797949 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd597a8b-7d1b-45bb-9360-cf20173b91c9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.798046 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: E1210 15:48:15.798156 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.298118215 +0000 UTC m=+122.018919432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.798303 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.798424 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-metrics-tls\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.799901 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.801020 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802458 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsrs\" (UniqueName: \"kubernetes.io/projected/fbc81518-f2e1-452d-a57e-d52678bf4359-kube-api-access-fhsrs\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802519 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gws25\" (UniqueName: \"kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802562 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802588 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-tmp-dir\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802612 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802639 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802664 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802693 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vz2\" (UniqueName: \"kubernetes.io/projected/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-kube-api-access-28vz2\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802728 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlkb\" (UniqueName: \"kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802805 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802836 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-tmpfs\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802865 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-config\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802884 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-serving-cert\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802906 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8spz2\" (UniqueName: \"kubernetes.io/projected/26a9e3c3-f100-41fa-81ea-2790ebff1438-kube-api-access-8spz2\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802931 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802960 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802980 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsbn\" (UniqueName: \"kubernetes.io/projected/397fe639-b4d1-4a13-9327-50661b7f938a-kube-api-access-wmsbn\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.802997 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspn9\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-kube-api-access-bspn9\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803014 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" 
(UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803048 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvblc\" (UniqueName: \"kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803065 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803100 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803137 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803157 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-available-featuregates\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803174 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-oauth-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803191 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803222 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/397fe639-b4d1-4a13-9327-50661b7f938a-tmpfs\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803238 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28b80315-d885-48d0-b39a-2cb9620c5a71-cert\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803293 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fn89q\" (UniqueName: \"kubernetes.io/projected/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-kube-api-access-fn89q\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803331 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-images\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803362 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803378 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803396 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnghc\" (UniqueName: \"kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803421 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803441 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmf2c\" (UniqueName: 
\"kubernetes.io/projected/a91bd0cb-9575-41db-ac31-b2eef142f4da-kube-api-access-fmf2c\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803472 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336e7c6-4749-46b7-8709-0b03b511147d-config\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803490 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803505 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cms\" (UniqueName: \"kubernetes.io/projected/28b80315-d885-48d0-b39a-2cb9620c5a71-kube-api-access-j8cms\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803523 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803644 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803703 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803723 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zn89\" (UniqueName: \"kubernetes.io/projected/b2b61e86-45b8-4491-8236-f056a381a5ab-kube-api-access-9zn89\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803740 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: 
\"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803768 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803797 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrx9\" (UniqueName: \"kubernetes.io/projected/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-kube-api-access-ksrx9\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803858 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-service-ca\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803879 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-oauth-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803961 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.803981 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95096727-f31b-4fd3-914a-152df463c991-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804014 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804033 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh4h\" (UniqueName: \"kubernetes.io/projected/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-kube-api-access-lnh4h\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 
15:48:15.804176 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-socket-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804198 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-mountpoint-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804219 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n47kc\" (UniqueName: \"kubernetes.io/projected/95096727-f31b-4fd3-914a-152df463c991-kube-api-access-n47kc\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804239 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-csi-data-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804257 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpwv\" (UniqueName: \"kubernetes.io/projected/2e757457-618f-4625-8008-3cb8989aa882-kube-api-access-rdpwv\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804302 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a9e3c3-f100-41fa-81ea-2790ebff1438-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804324 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt84\" (UniqueName: \"kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804342 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-apiservice-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804418 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc81518-f2e1-452d-a57e-d52678bf4359-config\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805521 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0336e7c6-4749-46b7-8709-0b03b511147d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805542 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-trusted-ca-bundle\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805561 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-node-bootstrap-token\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805593 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805612 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-key\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805629 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27dl\" (UniqueName: \"kubernetes.io/projected/7fddbf1c-72d3-474b-b262-e852d4ea917b-kube-api-access-c27dl\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805667 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9z9h\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805692 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805710 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805734 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805751 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7n5\" (UniqueName: \"kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805772 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hlbk\" (UniqueName: \"kubernetes.io/projected/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-kube-api-access-4hlbk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805796 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805834 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805854 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805871 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb7v\" (UniqueName: \"kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805888 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-certs\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805923 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805938 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc81518-f2e1-452d-a57e-d52678bf4359-serving-cert\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.805985 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b51af6b1-547c-4709-b115-93e1173bca33-machine-approver-tls\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.806018 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-auth-proxy-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.806064 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.806080 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-registration-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.804728 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.806901 5114 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:15 crc kubenswrapper[5114]: E1210 15:48:15.807771 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.307757158 +0000 UTC m=+122.028558335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.808447 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0336e7c6-4749-46b7-8709-0b03b511147d-tmp-dir\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.809651 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-available-featuregates\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.821858 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.823842 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.841345 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.867569 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.882764 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.886472 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.902173 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.907349 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:15 crc kubenswrapper[5114]: E1210 15:48:15.907607 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.407575398 +0000 UTC m=+122.128376615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908110 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908177 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-tmpfs\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908216 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8spz2\" (UniqueName: \"kubernetes.io/projected/26a9e3c3-f100-41fa-81ea-2790ebff1438-kube-api-access-8spz2\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908253 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908336 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsbn\" (UniqueName: \"kubernetes.io/projected/397fe639-b4d1-4a13-9327-50661b7f938a-kube-api-access-wmsbn\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: 
\"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908385 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bspn9\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-kube-api-access-bspn9\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908419 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908459 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvblc\" (UniqueName: \"kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908493 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908558 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908621 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908686 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908743 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/397fe639-b4d1-4a13-9327-50661b7f938a-tmpfs\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908776 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28b80315-d885-48d0-b39a-2cb9620c5a71-cert\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908826 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908857 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908894 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnghc\" (UniqueName: \"kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908946 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.908996 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fmf2c\" (UniqueName: \"kubernetes.io/projected/a91bd0cb-9575-41db-ac31-b2eef142f4da-kube-api-access-fmf2c\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909077 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909141 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cms\" (UniqueName: \"kubernetes.io/projected/28b80315-d885-48d0-b39a-2cb9620c5a71-kube-api-access-j8cms\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909399 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909421 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909501 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909562 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrx9\" (UniqueName: \"kubernetes.io/projected/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-kube-api-access-ksrx9\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909644 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909698 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh4h\" (UniqueName: \"kubernetes.io/projected/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-kube-api-access-lnh4h\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909777 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-socket-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909809 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-mountpoint-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909848 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-csi-data-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" 
Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909887 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpwv\" (UniqueName: \"kubernetes.io/projected/2e757457-618f-4625-8008-3cb8989aa882-kube-api-access-rdpwv\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.909938 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a9e3c3-f100-41fa-81ea-2790ebff1438-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910002 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwt84\" (UniqueName: \"kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910055 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-apiservice-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910072 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-tmpfs\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910157 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc81518-f2e1-452d-a57e-d52678bf4359-config\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910238 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-node-bootstrap-token\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910329 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-key\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910382 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c27dl\" (UniqueName: 
\"kubernetes.io/projected/7fddbf1c-72d3-474b-b262-e852d4ea917b-kube-api-access-c27dl\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910457 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910894 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-csi-data-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.910926 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: E1210 15:48:15.911163 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.411133088 +0000 UTC m=+122.131934305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912092 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912151 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912183 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/397fe639-b4d1-4a13-9327-50661b7f938a-tmpfs\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912610 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.911167 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912887 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.912968 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hlbk\" (UniqueName: \"kubernetes.io/projected/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-kube-api-access-4hlbk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913094 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913175 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913586 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913206 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-socket-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913624 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913243 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-mountpoint-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913729 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nwb7v\" (UniqueName: \"kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913764 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-certs\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913850 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc81518-f2e1-452d-a57e-d52678bf4359-serving-cert\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913867 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913936 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913966 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-registration-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.913994 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-webhook-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914034 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-tmpfs\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914060 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914085 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-config-volume\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914106 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd597a8b-7d1b-45bb-9360-cf20173b91c9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914125 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-cabundle\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc 
kubenswrapper[5114]: I1210 15:48:15.914148 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd597a8b-7d1b-45bb-9360-cf20173b91c9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914186 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-plugins-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914213 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd597a8b-7d1b-45bb-9360-cf20173b91c9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914252 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.914636 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-plugins-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.915042 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-config-volume\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.915167 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc81518-f2e1-452d-a57e-d52678bf4359-config\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.915252 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e757457-618f-4625-8008-3cb8989aa882-registration-dir\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.915574 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: 
\"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916156 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-key\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916312 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-metrics-tls\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916556 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916645 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a9e3c3-f100-41fa-81ea-2790ebff1438-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916808 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsrs\" (UniqueName: \"kubernetes.io/projected/fbc81518-f2e1-452d-a57e-d52678bf4359-kube-api-access-fhsrs\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916867 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gws25\" (UniqueName: \"kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916937 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-tmp-dir\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.916989 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.917003 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.917045 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.917094 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.917150 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28vz2\" (UniqueName: \"kubernetes.io/projected/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-kube-api-access-28vz2\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.917874 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-webhook-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.918147 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-tmp-dir\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.918705 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-tmpfs\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.920036 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd597a8b-7d1b-45bb-9360-cf20173b91c9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.920883 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-metrics-tls\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.923552 5114 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.927552 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-node-bootstrap-token\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.928628 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28b80315-d885-48d0-b39a-2cb9620c5a71-cert\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.928234 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7fddbf1c-72d3-474b-b262-e852d4ea917b-signing-cabundle\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.929776 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc81518-f2e1-452d-a57e-d52678bf4359-serving-cert\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.930029 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-apiservice-cert\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.932818 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a91bd0cb-9575-41db-ac31-b2eef142f4da-certs\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.934443 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.935258 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt"] Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.956702 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: I1210 15:48:15.962323 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:15 crc kubenswrapper[5114]: 
I1210 15:48:15.999583 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk"] Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.000992 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.009129 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rpp\" (UniqueName: \"kubernetes.io/projected/c19a2b06-50e9-4cb1-a04f-a495644f4cb1-kube-api-access-l9rpp\") pod \"console-operator-67c89758df-cfjf4\" (UID: \"c19a2b06-50e9-4cb1-a04f-a495644f4cb1\") " pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.022329 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.023464 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.023805 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.523761702 +0000 UTC m=+122.244562879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.024902 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.025236 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.525221229 +0000 UTC m=+122.246022406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.030927 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvbw2\" (UniqueName: \"kubernetes.io/projected/6f642643-9482-4e17-b0f7-bd7bf530f5a1-kube-api-access-rvbw2\") pod \"migrator-866fcbc849-288ln\" (UID: \"6f642643-9482-4e17-b0f7-bd7bf530f5a1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.042651 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.050708 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pk7t\" (UniqueName: \"kubernetes.io/projected/b32a5174-fc1f-4e6e-8173-414921f6d86f-kube-api-access-7pk7t\") pod \"router-default-68cf44c8b8-57xp7\" (UID: \"b32a5174-fc1f-4e6e-8173-414921f6d86f\") " pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.061071 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.064185 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zqx8l"] Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.068958 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d695\" (UniqueName: \"kubernetes.io/projected/d68dcc8d-b977-44e9-a63c-1cee775b50f2-kube-api-access-8d695\") pod \"downloads-747b44746d-7nbcs\" (UID: \"d68dcc8d-b977-44e9-a63c-1cee775b50f2\") " pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:16 crc kubenswrapper[5114]: W1210 15:48:16.077976 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd4ccf2_5090_485f_8b42_ca4c2c6f293d.slice/crio-93f963dc873ad6d62e828e5466362d30c52bd4298eabb0606fc79966b02862c0 WatchSource:0}: Error finding container 93f963dc873ad6d62e828e5466362d30c52bd4298eabb0606fc79966b02862c0: Status 404 returned error can't find the container with id 93f963dc873ad6d62e828e5466362d30c52bd4298eabb0606fc79966b02862c0 Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.082583 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.090233 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-console-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.095943 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" 
event={"ID":"4064ac8e-a335-40db-a1d6-38f9e8838fbf","Type":"ContainerStarted","Data":"6ad4907cf4cdb583276755c8d4c4959a4d806505ef35f9edc1cdc29c1f69db8d"} Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.097130 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" event={"ID":"9d8735f9-6304-4571-a4ef-490336afe153","Type":"ContainerStarted","Data":"877257be471e03d3146aee86b14a3e4643e2bd92a4aaf217f577c27ae0ef2976"} Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.098002 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" event={"ID":"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d","Type":"ContainerStarted","Data":"93f963dc873ad6d62e828e5466362d30c52bd4298eabb0606fc79966b02862c0"} Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.098665 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" event={"ID":"0342172d-59ba-477b-8044-ed71dabb4eed","Type":"ContainerStarted","Data":"f54bd8f72e510eb9f344723b4a263b01cf033fe1ea19cbcdfbe60edb5fd5a4ab"} Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.099934 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" event={"ID":"8b6e28a6-b1a9-4942-8457-e54258393016","Type":"ContainerStarted","Data":"d824b29047c964e5cf14493db2ef93c19abfd89833779c9a3e87583c122053cc"} Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.100676 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.115791 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0336e7c6-4749-46b7-8709-0b03b511147d-serving-cert\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.126687 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.126899 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.626878255 +0000 UTC m=+122.347679452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.127136 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.127896 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.62788335 +0000 UTC m=+122.348684537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.161464 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.166604 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-config\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.203428 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zn89\" (UniqueName: \"kubernetes.io/projected/b2b61e86-45b8-4491-8236-f056a381a5ab-kube-api-access-9zn89\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.205861 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.217790 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.221520 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 10 15:48:16 crc 
kubenswrapper[5114]: I1210 15:48:16.228560 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.229662 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.729637629 +0000 UTC m=+122.450438816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.230864 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-serving-cert\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.241323 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.249347 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.261115 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.270575 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwbqb\" (UniqueName: \"kubernetes.io/projected/826bf927-48e7-4696-92d0-748f01cdc1a8-kube-api-access-lwbqb\") pod \"openshift-apiserver-operator-846cbfc458-x9hfx\" (UID: \"826bf927-48e7-4696-92d0-748f01cdc1a8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.301234 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9z9h\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.330542 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.330895 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.830877845 +0000 UTC m=+122.551679022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.342485 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.343418 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.358850 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.363739 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.372433 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.384826 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.396159 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2b61e86-45b8-4491-8236-f056a381a5ab-console-oauth-config\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.403867 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.409991 5114 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.423298 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.428879 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-oauth-serving-cert\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.434309 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.434682 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:16.934660996 +0000 UTC m=+122.655462173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.444773 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.449830 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-service-ca\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.461155 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.473865 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95096727-f31b-4fd3-914a-152df463c991-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.489598 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.490600 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b61e86-45b8-4491-8236-f056a381a5ab-trusted-ca-bundle\") pod \"console-64d44f6ddf-59hqn\" (UID: \"b2b61e86-45b8-4491-8236-f056a381a5ab\") " pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.516856 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn89q\" (UniqueName: \"kubernetes.io/projected/53ed4e9f-c2c3-47bc-a20b-42db16b6d57a-kube-api-access-fn89q\") pod \"machine-config-controller-f9cdd68f7-jzw4f\" (UID: \"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.524410 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.531892 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336e7c6-4749-46b7-8709-0b03b511147d-config\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.541009 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.541536 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.041517973 +0000 UTC m=+122.762319140 (durationBeforeRetry 500ms). 
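
The MountVolume.MountDevice and UnmountVolume.TearDown failures repeated throughout this window all report the same underlying condition: the kubelet does not yet have a registered CSI driver named kubevirt.io.hostpath-provisioner, so every operation touching the image-registry PVC is requeued with a fixed 500ms durationBeforeRetry. The driver only becomes visible once the csi-hostpathplugin-j45nf pod (being sandboxed later in this log) starts and its registrar drops a socket into the kubelet plugin-registration directory. The following is a minimal diagnostic sketch, not kubelet code, assuming the conventional /var/lib/kubelet/plugins_registry path and the typical "<driver-name>-reg.sock" socket naming; both are assumptions about this node, taken from standard kubelet defaults rather than from this log.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // driverRegistered lists the plugin-registration sockets the kubelet's
    // plugin watcher would see and reports whether one matches the driver
    // name. An empty or missing entry here is consistent with the
    // "driver name ... not found in the list of registered CSI drivers"
    // errors in the log above.
    func driverRegistered(dir, driver string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	found := false
    	for _, e := range entries {
    		// Registration sockets are typically named "<driver-name>-reg.sock".
    		fmt.Println(e.Name())
    		if strings.Contains(e.Name(), driver) {
    			found = true
    		}
    	}
    	return found, nil
    }

    func main() {
    	ok, err := driverRegistered("/var/lib/kubelet/plugins_registry",
    		"kubevirt.io.hostpath-provisioner")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "cannot read plugin registry:", err)
    		os.Exit(1)
    	}
    	fmt.Println("kubevirt.io.hostpath-provisioner registered:", ok)
    }

Until a matching socket appears, the requeued operations below keep failing with the same error every ~500ms.
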
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.542190 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.555344 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b51af6b1-547c-4709-b115-93e1173bca33-machine-approver-tls\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.561429 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.572424 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b51af6b1-547c-4709-b115-93e1173bca33-auth-proxy-config\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.608654 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.613538 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.622469 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd597a8b-7d1b-45bb-9360-cf20173b91c9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.623100 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.633071 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95096727-f31b-4fd3-914a-152df463c991-images\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.641031 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc 
kubenswrapper[5114]: I1210 15:48:16.641855 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.642340 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.142317858 +0000 UTC m=+122.863119035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.650788 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg7wd\" (UniqueName: \"kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd\") pod \"oauth-openshift-66458b6674-qxtmf\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.662363 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.674034 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd4n\" (UniqueName: \"kubernetes.io/projected/c910b353-a094-46d1-9980-657a309d9050-kube-api-access-mqd4n\") pod \"etcd-operator-69b85846b6-tsm29\" (UID: \"c910b353-a094-46d1-9980-657a309d9050\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.682109 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.688816 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmtsb\" (UniqueName: \"kubernetes.io/projected/64fd8522-fc45-4417-8a06-59b34f001433-kube-api-access-gmtsb\") pod \"kube-storage-version-migrator-operator-565b79b866-4fv4w\" (UID: \"64fd8522-fc45-4417-8a06-59b34f001433\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.745059 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.745570 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.245527594 +0000 UTC m=+122.966328771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.787022 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bspn9\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-kube-api-access-bspn9\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.787679 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.788811 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd597a8b-7d1b-45bb-9360-cf20173b91c9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z72bq\" (UID: \"dd597a8b-7d1b-45bb-9360-cf20173b91c9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.792097 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.816162 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksrx9\" (UniqueName: \"kubernetes.io/projected/edeb5b7f-b7b2-4b21-a634-f9113bbe9487-kube-api-access-ksrx9\") pod \"dns-default-dnk6l\" (UID: \"edeb5b7f-b7b2-4b21-a634-f9113bbe9487\") " pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.842024 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmf2c\" (UniqueName: \"kubernetes.io/projected/a91bd0cb-9575-41db-ac31-b2eef142f4da-kube-api-access-fmf2c\") pod \"machine-config-server-hxjhm\" (UID: \"a91bd0cb-9575-41db-ac31-b2eef142f4da\") " pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.846621 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.846749 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.346723059 +0000 UTC m=+123.067524236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.847233 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.847696 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.347688743 +0000 UTC m=+123.068489920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.866094 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvblc\" (UniqueName: \"kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc\") pod \"route-controller-manager-776cdc94d6-gzfvl\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.884421 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.894487 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.911146 5114 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.911446 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert podName:d6e3098c-67bd-4d09-b1f5-04309f94d5ac nodeName:}" failed. 
No retries permitted until 2025-12-10 15:48:17.411420232 +0000 UTC m=+123.132221409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert") pod "olm-operator-5cdf44d969-2bl74" (UID: "d6e3098c-67bd-4d09-b1f5-04309f94d5ac") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.911178 5114 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.911648 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert podName:397fe639-b4d1-4a13-9327-50661b7f938a nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.411639658 +0000 UTC m=+123.132440835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert") pod "catalog-operator-75ff9f647d-w2skq" (UID: "397fe639-b4d1-4a13-9327-50661b7f938a") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.913375 5114 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.913460 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca podName:1cce5f28-0219-4980-b7bd-26cbfcbe6435 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.413440743 +0000 UTC m=+123.134241920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-wpjqd" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.914469 5114 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.914577 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca podName:c7d243eb-5e31-4635-803d-2408fe9f8575 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.414552081 +0000 UTC m=+123.135353328 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca") pod "controller-manager-65b6cccf98-d6hj2" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.914480 5114 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.914645 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert podName:397fe639-b4d1-4a13-9327-50661b7f938a nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.414629913 +0000 UTC m=+123.135431090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert") pod "catalog-operator-75ff9f647d-w2skq" (UID: "397fe639-b4d1-4a13-9327-50661b7f938a") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915583 5114 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915638 5114 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915723 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls podName:2bf2fd29-b4b2-4669-a9a8-99c061aa98c8 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.415708801 +0000 UTC m=+123.136509978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-75ffdb6fcd-dslgq" (UID: "2bf2fd29-b4b2-4669-a9a8-99c061aa98c8") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915762 5114 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915828 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics podName:1cce5f28-0219-4980-b7bd-26cbfcbe6435 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.415816193 +0000 UTC m=+123.136617370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-wpjqd" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.915860 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert podName:d6e3098c-67bd-4d09-b1f5-04309f94d5ac nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.415852794 +0000 UTC m=+123.136654081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert") pod "olm-operator-5cdf44d969-2bl74" (UID: "d6e3098c-67bd-4d09-b1f5-04309f94d5ac") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.918164 5114 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.918253 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume podName:79e5de70-9480-4091-8467-73e7b3d12424 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.418234394 +0000 UTC m=+123.139035571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume") pod "collect-profiles-29423025-zw42q" (UID: "79e5de70-9480-4091-8467-73e7b3d12424") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.918177 5114 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.918342 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config podName:c7d243eb-5e31-4635-803d-2408fe9f8575 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.418324717 +0000 UTC m=+123.139125894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config") pod "controller-manager-65b6cccf98-d6hj2" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.919659 5114 request.go:752] "Waited before sending request" delay="1.007042345s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/serviceaccounts/default/token" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.923920 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpwv\" (UniqueName: \"kubernetes.io/projected/2e757457-618f-4625-8008-3cb8989aa882-kube-api-access-rdpwv\") pod \"csi-hostpathplugin-j45nf\" (UID: \"2e757457-618f-4625-8008-3cb8989aa882\") " pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.924803 5114 projected.go:289] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.924925 5114 projected.go:194] Error preparing data for projected volume kube-api-access-45k7m for pod openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.925049 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m podName:00c50168-1c40-4c3d-9a03-c99c13223df8 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.425030296 +0000 UTC m=+123.145831473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-45k7m" (UniqueName: "kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m") pod "ingress-operator-6b9cb4dbcf-wx9kv" (UID: "00c50168-1c40-4c3d-9a03-c99c13223df8") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.937102 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.941691 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.941765 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cms\" (UniqueName: \"kubernetes.io/projected/28b80315-d885-48d0-b39a-2cb9620c5a71-kube-api-access-j8cms\") pod \"ingress-canary-gg274\" (UID: \"28b80315-d885-48d0-b39a-2cb9620c5a71\") " pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.946197 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.948740 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.948867 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.448848287 +0000 UTC m=+123.169649464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.949462 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.949717 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.449709059 +0000 UTC m=+123.170510226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.964150 5114 projected.go:289] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.965061 5114 projected.go:194] Error preparing data for projected volume kube-api-access-2b5lc for pod openshift-dns-operator/dns-operator-799b87ffcd-j6t46: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.965227 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc podName:36137111-458a-4f99-bcbf-6606f80d8ee0 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.4651962 +0000 UTC m=+123.185997367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2b5lc" (UniqueName: "kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc") pod "dns-operator-799b87ffcd-j6t46" (UID: "36137111-458a-4f99-bcbf-6606f80d8ee0") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.967983 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gg274" Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.971959 5114 projected.go:289] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.971985 5114 projected.go:194] Error preparing data for projected volume kube-api-access-t2vsv for pod openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: E1210 15:48:16.972063 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv podName:81a2a5e5-1a13-4e0d-81a7-868716149070 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.472040343 +0000 UTC m=+123.192841520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2vsv" (UniqueName: "kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv") pod "authentication-operator-7f5c659b84-nrpbd" (UID: "81a2a5e5-1a13-4e0d-81a7-868716149070") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.986450 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.986525 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" Dec 10 15:48:16 crc kubenswrapper[5114]: I1210 15:48:16.990827 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.003848 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hxjhm" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.029970 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.051782 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.052131 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.552102974 +0000 UTC m=+123.272904151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.052479 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.052833 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.552826202 +0000 UTC m=+123.273627379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.054792 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.054967 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27dl\" (UniqueName: \"kubernetes.io/projected/7fddbf1c-72d3-474b-b262-e852d4ea917b-kube-api-access-c27dl\") pod \"service-ca-74545575db-wp2cx\" (UID: \"7fddbf1c-72d3-474b-b262-e852d4ea917b\") " pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.067841 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.086184 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.117066 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.120077 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console-operator/console-operator-67c89758df-cfjf4" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.120134 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.121034 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.125372 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" event={"ID":"0342172d-59ba-477b-8044-ed71dabb4eed","Type":"ContainerStarted","Data":"70c861e51a33575a6a78e3431d8e5d1fa6b5139c108f8c08348c602f0cc04df9"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.125420 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" event={"ID":"0342172d-59ba-477b-8044-ed71dabb4eed","Type":"ContainerStarted","Data":"65863c1a9f40388a9dd96e929ae98f12b0fa4ad37ccbe031f1c8620f3e28fef9"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.135166 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" event={"ID":"8b6e28a6-b1a9-4942-8457-e54258393016","Type":"ContainerStarted","Data":"e1bfecb70421fb78a1696a421949fcbe6cb5f56308964f396334f58a12280ecd"} Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.137463 5114 projected.go:289] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.137500 5114 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.137578 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access podName:0336e7c6-4749-46b7-8709-0b03b511147d nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.637555551 +0000 UTC m=+123.358356728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access") pod "kube-apiserver-operator-575994946d-dsvk5" (UID: "0336e7c6-4749-46b7-8709-0b03b511147d") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.140757 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" event={"ID":"9d8735f9-6304-4571-a4ef-490336afe153","Type":"ContainerStarted","Data":"d0e99d7253490e5e711c7236a8e4377be4ae773c8b27cd98aa133e448557e97a"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.140826 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" event={"ID":"9d8735f9-6304-4571-a4ef-490336afe153","Type":"ContainerStarted","Data":"b82ea89140d6b6eae3ff43cc439258715a1b0c2f3493bd8616498417e636a274"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.142560 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" event={"ID":"ffd4ccf2-5090-485f-8b42-ca4c2c6f293d","Type":"ContainerStarted","Data":"85268897651f78be29927aae218628b393d057f3cb91fdb9122e0fefc7c730c4"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.145540 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hxjhm" event={"ID":"a91bd0cb-9575-41db-ac31-b2eef142f4da","Type":"ContainerStarted","Data":"e98d48d22b0ddc708530e9ee923007d6052d705d827d14fb90a755698de3365f"} Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.155091 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.155325 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.655288759 +0000 UTC m=+123.376089936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.155332 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console/downloads-747b44746d-7nbcs" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.155430 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.155564 5114 projected.go:289] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.155755 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.156640 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.656629793 +0000 UTC m=+123.377430970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.161452 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.168317 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwt84\" (UniqueName: \"kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84\") pod \"cni-sysctl-allowlist-ds-55xzh\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.215134 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.215196 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.215339 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsrs\" (UniqueName: \"kubernetes.io/projected/fbc81518-f2e1-452d-a57e-d52678bf4359-kube-api-access-fhsrs\") pod \"service-ca-operator-5b9c976747-8bnxz\" (UID: \"fbc81518-f2e1-452d-a57e-d52678bf4359\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.246370 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.255165 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.256534 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.256664 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.756637388 +0000 UTC m=+123.477438565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.257051 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.257845 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.757834058 +0000 UTC m=+123.478635235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.261454 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.281927 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-wp2cx" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.281935 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-ingress/router-default-68cf44c8b8-57xp7" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.282114 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.282672 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.301742 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.313633 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.321717 5114 projected.go:289] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.324116 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:17 crc kubenswrapper[5114]: W1210 15:48:17.333002 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb32a5174_fc1f_4e6e_8173_414921f6d86f.slice/crio-c16c344dad833c41ace8b7d276cf0808345f6d5b2f95efb394b20cdd8db72d87 WatchSource:0}: Error finding container c16c344dad833c41ace8b7d276cf0808345f6d5b2f95efb394b20cdd8db72d87: Status 404 returned error can't find the container with id c16c344dad833c41ace8b7d276cf0808345f6d5b2f95efb394b20cdd8db72d87 Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.338742 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.359474 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.359867 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.859830123 +0000 UTC m=+123.580631300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.360250 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.363741 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.364342 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.864260595 +0000 UTC m=+123.585061772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.378163 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.393227 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.406643 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.423341 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.426574 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j45nf"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.441191 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.461457 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.461695 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.461915 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.96188932 +0000 UTC m=+123.682690507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.462133 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.462185 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.462650 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.462753 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.463671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464042 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.463673 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464167 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464089 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45k7m\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464393 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464433 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464533 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464602 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.464656 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:17 crc 
kubenswrapper[5114]: I1210 15:48:17.466340 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.467924 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:17.965210934 +0000 UTC m=+123.686012111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.469173 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-srv-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.469410 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.469744 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-profile-collector-cert\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.471019 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-srv-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.471795 5114 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.471848 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.472420 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.474505 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/397fe639-b4d1-4a13-9327-50661b7f938a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.475185 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.475864 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45k7m\" (UniqueName: \"kubernetes.io/projected/00c50168-1c40-4c3d-9a03-c99c13223df8-kube-api-access-45k7m\") pod \"ingress-operator-6b9cb4dbcf-wx9kv\" (UID: \"00c50168-1c40-4c3d-9a03-c99c13223df8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.481074 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.502656 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.504749 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gg274"] Dec 10 15:48:17 crc kubenswrapper[5114]: W1210 15:48:17.519306 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b80315_d885_48d0_b39a_2cb9620c5a71.slice/crio-39a9e778b3c1ae2579d4b5e6ceeecbbd00a408d9aaaf45e545888697206bcde2 WatchSource:0}: Error finding container 39a9e778b3c1ae2579d4b5e6ceeecbbd00a408d9aaaf45e545888697206bcde2: Status 404 returned error can't find the container with id 39a9e778b3c1ae2579d4b5e6ceeecbbd00a408d9aaaf45e545888697206bcde2 Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.522675 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.542441 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.551138 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.562219 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.566590 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.566823 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5lc\" (UniqueName: \"kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.566921 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.066829039 +0000 UTC m=+123.787630216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.567126 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vsv\" (UniqueName: \"kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.567982 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.570912 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b5lc\" (UniqueName: \"kubernetes.io/projected/36137111-458a-4f99-bcbf-6606f80d8ee0-kube-api-access-2b5lc\") pod \"dns-operator-799b87ffcd-j6t46\" (UID: \"36137111-458a-4f99-bcbf-6606f80d8ee0\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.571719 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vsv\" (UniqueName: \"kubernetes.io/projected/81a2a5e5-1a13-4e0d-81a7-868716149070-kube-api-access-t2vsv\") pod \"authentication-operator-7f5c659b84-nrpbd\" (UID: \"81a2a5e5-1a13-4e0d-81a7-868716149070\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.582644 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.583789 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.587442 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dnk6l"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.596266 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.605470 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.609497 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.623228 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.644152 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.649882 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.662719 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.664109 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-cfjf4"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.664502 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.670381 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.671048 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.671229 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.671674 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.171616165 +0000 UTC m=+123.892417342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.681905 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.683072 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0336e7c6-4749-46b7-8709-0b03b511147d-kube-api-access\") pod \"kube-apiserver-operator-575994946d-dsvk5\" (UID: \"0336e7c6-4749-46b7-8709-0b03b511147d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.686383 5114 projected.go:194] Error preparing data for projected volume kube-api-access-ltlkb for pod openshift-config-operator/openshift-config-operator-5777786469-v9phm: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.686592 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb podName:7d00e7eb-f974-4213-90bd-aeef8bed3a8a nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.186550532 +0000 UTC m=+123.907351709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltlkb" (UniqueName: "kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb") pod "openshift-config-operator-5777786469-v9phm" (UID: "7d00e7eb-f974-4213-90bd-aeef8bed3a8a") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: W1210 15:48:17.702756 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod826bf927_48e7_4696_92d0_748f01cdc1a8.slice/crio-645f179978dec417b32fa9c71035d506e10d851805416f3ab40d50314036302a WatchSource:0}: Error finding container 645f179978dec417b32fa9c71035d506e10d851805416f3ab40d50314036302a: Status 404 returned error can't find the container with id 645f179978dec417b32fa9c71035d506e10d851805416f3ab40d50314036302a Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.703628 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.721312 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.723466 5114 projected.go:194] Error preparing data for projected volume kube-api-access-nc7n5 for pod openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.723550 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5 podName:b51af6b1-547c-4709-b115-93e1173bca33 nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.223530966 +0000 UTC m=+123.944332143 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc7n5" (UniqueName: "kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5") pod "machine-approver-54c688565-lwbkt" (UID: "b51af6b1-547c-4709-b115-93e1173bca33") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.725198 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.732669 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-wp2cx"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.743207 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-7nbcs"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.765678 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.771194 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47kc\" (UniqueName: \"kubernetes.io/projected/95096727-f31b-4fd3-914a-152df463c991-kube-api-access-n47kc\") pod \"machine-api-operator-755bb95488-db5ff\" (UID: \"95096727-f31b-4fd3-914a-152df463c991\") " pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.771947 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.772325 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.272309437 +0000 UTC m=+123.993110614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.782598 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hlbk\" (UniqueName: \"kubernetes.io/projected/2bf2fd29-b4b2-4669-a9a8-99c061aa98c8-kube-api-access-4hlbk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dslgq\" (UID: \"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.782919 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.791424 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsbn\" (UniqueName: \"kubernetes.io/projected/397fe639-b4d1-4a13-9327-50661b7f938a-kube-api-access-wmsbn\") pod \"catalog-operator-75ff9f647d-w2skq\" (UID: \"397fe639-b4d1-4a13-9327-50661b7f938a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.792490 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwb7v\" (UniqueName: \"kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v\") pod \"collect-profiles-29423025-zw42q\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.793438 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8spz2\" (UniqueName: \"kubernetes.io/projected/26a9e3c3-f100-41fa-81ea-2790ebff1438-kube-api-access-8spz2\") pod \"package-server-manager-77f986bd66-lskwt\" (UID: \"26a9e3c3-f100-41fa-81ea-2790ebff1438\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.797304 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28vz2\" (UniqueName: \"kubernetes.io/projected/d6e3098c-67bd-4d09-b1f5-04309f94d5ac-kube-api-access-28vz2\") pod \"olm-operator-5cdf44d969-2bl74\" (UID: \"d6e3098c-67bd-4d09-b1f5-04309f94d5ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.801346 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh4h\" (UniqueName: \"kubernetes.io/projected/014c41e7-892d-4fbc-ad4b-f2cd257e83b3-kube-api-access-lnh4h\") pod \"packageserver-7d4fc7d867-vxnbb\" (UID: \"014c41e7-892d-4fbc-ad4b-f2cd257e83b3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.801864 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.808897 5114 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" Dec 10 15:48:17 crc kubenswrapper[5114]: W1210 15:48:17.817931 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd68dcc8d_b977_44e9_a63c_1cee775b50f2.slice/crio-9c5ca25ddf9c8214f23fe9b2200200f85cb6d91860b85c6753ceeb8fcd1193b6 WatchSource:0}: Error finding container 9c5ca25ddf9c8214f23fe9b2200200f85cb6d91860b85c6753ceeb8fcd1193b6: Status 404 returned error can't find the container with id 9c5ca25ddf9c8214f23fe9b2200200f85cb6d91860b85c6753ceeb8fcd1193b6 Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.822960 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.827613 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.835594 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnghc\" (UniqueName: \"kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc\") pod \"controller-manager-65b6cccf98-d6hj2\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.842598 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.846422 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.861422 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.864941 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-59hqn"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.872231 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gws25\" (UniqueName: \"kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25\") pod \"marketplace-operator-547dbd544d-wpjqd\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.873013 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.873437 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-10 15:48:18.37342276 +0000 UTC m=+124.094223937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.881754 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.885422 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.891233 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.926516 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.929292 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.936043 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.938101 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.938199 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-tsm29"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.942413 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.948728 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.969154 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.974988 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.975192 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.475162769 +0000 UTC m=+124.195963956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.975583 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:17 crc kubenswrapper[5114]: E1210 15:48:17.976062 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.476046911 +0000 UTC m=+124.196848088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.985349 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f"] Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.988428 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 10 15:48:17 crc kubenswrapper[5114]: I1210 15:48:17.990596 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" Dec 10 15:48:18 crc kubenswrapper[5114]: W1210 15:48:18.018395 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8803937b_0d28_40bc_bdb9_12ea0b8d003c.slice/crio-def0380f8bc9f779f38e6fbb9252e6f286c486d2e6a4555e8a946ed6dea3f9be WatchSource:0}: Error finding container def0380f8bc9f779f38e6fbb9252e6f286c486d2e6a4555e8a946ed6dea3f9be: Status 404 returned error can't find the container with id def0380f8bc9f779f38e6fbb9252e6f286c486d2e6a4555e8a946ed6dea3f9be Dec 10 15:48:18 crc kubenswrapper[5114]: W1210 15:48:18.040628 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53ed4e9f_c2c3_47bc_a20b_42db16b6d57a.slice/crio-fb800d93f4245f6a35efa1a6b4ce350df53f4faf8035410ae0e1c34d790af61b WatchSource:0}: Error finding container fb800d93f4245f6a35efa1a6b4ce350df53f4faf8035410ae0e1c34d790af61b: Status 404 returned error can't find the container with id fb800d93f4245f6a35efa1a6b4ce350df53f4faf8035410ae0e1c34d790af61b Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.077844 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.077986 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.577961434 +0000 UTC m=+124.298762621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.078195 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.079217 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.579198435 +0000 UTC m=+124.299999612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.084066 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.085161 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.087840 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.101182 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.102519 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.124606 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.127447 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.163363 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" event={"ID":"4ff01055-87cd-4379-ba86-8778485be566","Type":"ContainerStarted","Data":"cc27cd10ab01f98bcdad2ef48b6b965d99feaa7702e08123655eb76dd684f767"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.166179 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-wp2cx" event={"ID":"7fddbf1c-72d3-474b-b262-e852d4ea917b","Type":"ContainerStarted","Data":"d6b751b48ebbd409ec3692336f4670b28585a55c13c28b4cfb62241ac55c2e8d"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.191514 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" event={"ID":"6f642643-9482-4e17-b0f7-bd7bf530f5a1","Type":"ContainerStarted","Data":"a1095001dd5d76f389087412ab6f9f1be74d18346157ec5d28838ab2405f8a41"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.192529 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.192692 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlkb\" (UniqueName: \"kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.194488 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.694466275 +0000 UTC m=+124.415267452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.202530 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltlkb\" (UniqueName: \"kubernetes.io/projected/7d00e7eb-f974-4213-90bd-aeef8bed3a8a-kube-api-access-ltlkb\") pod \"openshift-config-operator-5777786469-v9phm\" (UID: \"7d00e7eb-f974-4213-90bd-aeef8bed3a8a\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.218163 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" event={"ID":"dd597a8b-7d1b-45bb-9360-cf20173b91c9","Type":"ContainerStarted","Data":"68ec61c74efff7a7a854c3f53604c698e0a06032c82d3294584b2c00cadb2fc5"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.218205 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" event={"ID":"dd597a8b-7d1b-45bb-9360-cf20173b91c9","Type":"ContainerStarted","Data":"b3e39fe4f340674674f726930028d9ea24d3d407837e577b2ca7e07d4396ec6b"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.252596 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" event={"ID":"c910b353-a094-46d1-9980-657a309d9050","Type":"ContainerStarted","Data":"e49219d7c2268c030fee49e9bc4768727d257f1eb3f8e5717832f97066cf8b95"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.264991 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.265404 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" event={"ID":"64fd8522-fc45-4417-8a06-59b34f001433","Type":"ContainerStarted","Data":"8d2a20e0700bbd8fd0c0347dea143f52c4fe8d3fd49df727285b7cffba6a58ac"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.267903 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.283919 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" event={"ID":"826bf927-48e7-4696-92d0-748f01cdc1a8","Type":"ContainerStarted","Data":"b5b98ea1a45e9b6688d0384bba823479dd8cecb32c15b6e3fa88f10b39a7c2c7"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.283963 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" event={"ID":"826bf927-48e7-4696-92d0-748f01cdc1a8","Type":"ContainerStarted","Data":"645f179978dec417b32fa9c71035d506e10d851805416f3ab40d50314036302a"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.285033 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-7nbcs" event={"ID":"d68dcc8d-b977-44e9-a63c-1cee775b50f2","Type":"ContainerStarted","Data":"9c5ca25ddf9c8214f23fe9b2200200f85cb6d91860b85c6753ceeb8fcd1193b6"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.294173 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7n5\" (UniqueName: \"kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.294322 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.294697 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.794680165 +0000 UTC m=+124.515481342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.307098 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.307815 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7n5\" (UniqueName: \"kubernetes.io/projected/b51af6b1-547c-4709-b115-93e1173bca33-kube-api-access-nc7n5\") pod \"machine-approver-54c688565-lwbkt\" (UID: \"b51af6b1-547c-4709-b115-93e1173bca33\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.377626 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gg274" event={"ID":"28b80315-d885-48d0-b39a-2cb9620c5a71","Type":"ContainerStarted","Data":"02767c32bd1b471fd6806e0870df9bf6094b46ef954a5ef2e11d3f810ee863ea"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.377665 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gg274" event={"ID":"28b80315-d885-48d0-b39a-2cb9620c5a71","Type":"ContainerStarted","Data":"39a9e778b3c1ae2579d4b5e6ceeecbbd00a408d9aaaf45e545888697206bcde2"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.379203 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" event={"ID":"8803937b-0d28-40bc-bdb9-12ea0b8d003c","Type":"ContainerStarted","Data":"def0380f8bc9f779f38e6fbb9252e6f286c486d2e6a4555e8a946ed6dea3f9be"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.385311 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dnk6l" event={"ID":"edeb5b7f-b7b2-4b21-a634-f9113bbe9487","Type":"ContainerStarted","Data":"c7ca1a2baef1665d54793027a207f27b3be67162436337e953e3723d30dabbff"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.395702 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.396840 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.896809024 +0000 UTC m=+124.617610211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.404459 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.408121 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:18.908108429 +0000 UTC m=+124.628909606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.409227 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hxjhm" event={"ID":"a91bd0cb-9575-41db-ac31-b2eef142f4da","Type":"ContainerStarted","Data":"d289f3cae1e722da2d301f7defae8f3ec3611df11e549b23b37140d3699ddef8"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.414154 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" event={"ID":"fbc81518-f2e1-452d-a57e-d52678bf4359","Type":"ContainerStarted","Data":"30527783d9463137ee27131005a6314f73c8d84681941b0a454eeac5dce0829e"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.443929 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.449584 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.451704 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" event={"ID":"2e757457-618f-4625-8008-3cb8989aa882","Type":"ContainerStarted","Data":"5d795275f513bd4cf6f35c5ca88cb64e293ed0eb00564b02061f5d593ddb23c5"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.469109 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" event={"ID":"cb1fe217-9de4-455e-80e1-dd01805e7935","Type":"ContainerStarted","Data":"d1dd794a365a37dbd1d94e648f2e5d7ffec37f3bec46ae5a4005180b6e46959b"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.473740 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" event={"ID":"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a","Type":"ContainerStarted","Data":"fb800d93f4245f6a35efa1a6b4ce350df53f4faf8035410ae0e1c34d790af61b"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.476100 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" event={"ID":"b32a5174-fc1f-4e6e-8173-414921f6d86f","Type":"ContainerStarted","Data":"3cb7a959b99bb79686dc65ed8eca810b6cfbf64f0deceb3ed54887615c0ab704"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.476171 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" event={"ID":"b32a5174-fc1f-4e6e-8173-414921f6d86f","Type":"ContainerStarted","Data":"c16c344dad833c41ace8b7d276cf0808345f6d5b2f95efb394b20cdd8db72d87"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.478772 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" event={"ID":"c19a2b06-50e9-4cb1-a04f-a495644f4cb1","Type":"ContainerStarted","Data":"1ed954709f3e2b8ea5790cb50a239b69a36f0ee7a260dde0842057827864b40d"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.478817 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" event={"ID":"c19a2b06-50e9-4cb1-a04f-a495644f4cb1","Type":"ContainerStarted","Data":"e98e4cb47186a8fb93be49ec7fb1e1b2d73e822822d99bec64d7454b551d615e"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.479160 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.480565 5114 patch_prober.go:28] interesting pod/console-operator-67c89758df-cfjf4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.480608 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" podUID="c19a2b06-50e9-4cb1-a04f-a495644f4cb1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.483891 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" event={"ID":"b73fffad-3220-4b21-9fd2-046191bf30ab","Type":"ContainerStarted","Data":"b4342ae6d45ec4f4552726cda9c058e3420d34a7777c23f3badffae944b2406d"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.483929 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" event={"ID":"b73fffad-3220-4b21-9fd2-046191bf30ab","Type":"ContainerStarted","Data":"a0a65a9ab8f6515ffef8aaa54582cec2c7499ecf43ce62ae7a18230cc9acf293"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.484585 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.493129 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-59hqn" event={"ID":"b2b61e86-45b8-4491-8236-f056a381a5ab","Type":"ContainerStarted","Data":"735707f9292ae7670b2ac3e7e019ef7913eb38726b7ecc997efec2a0e7b3f783"} Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.500875 5114 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-gzfvl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.500946 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.506115 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: W1210 15:48:18.506316 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c50168_1c40_4c3d_9a03_c99c13223df8.slice/crio-69eadedeb37e0fdcec5794c962ca6a55032c5d5273c6536124330cb6067dfc9a WatchSource:0}: Error finding container 69eadedeb37e0fdcec5794c962ca6a55032c5d5273c6536124330cb6067dfc9a: Status 404 returned error can't find the container with id 69eadedeb37e0fdcec5794c962ca6a55032c5d5273c6536124330cb6067dfc9a Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.507487 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.007470208 +0000 UTC m=+124.728271385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.588560 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j6t46"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.610228 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.633130 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.133090069 +0000 UTC m=+124.853891236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.684624 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.684836 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-zqx8l" podStartSLOduration=104.684815375 podStartE2EDuration="1m44.684815375s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:18.679593243 +0000 UTC m=+124.400394420" watchObservedRunningTime="2025-12-10 15:48:18.684815375 +0000 UTC m=+124.405616552" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.693354 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.714056 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.714394 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.214367311 +0000 UTC m=+124.935168488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.771089 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.822656 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.823151 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.323135927 +0000 UTC m=+125.043937104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.896642 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.896907 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.905015 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.911643 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.925092 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:18 crc kubenswrapper[5114]: E1210 15:48:18.925727 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-10 15:48:19.425706077 +0000 UTC m=+125.146507254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:18 crc kubenswrapper[5114]: W1210 15:48:18.952551 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e5de70_9480_4091_8467_73e7b3d12424.slice/crio-25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c WatchSource:0}: Error finding container 25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c: Status 404 returned error can't find the container with id 25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c Dec 10 15:48:18 crc kubenswrapper[5114]: W1210 15:48:18.956635 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81a2a5e5_1a13_4e0d_81a7_868716149070.slice/crio-98907b6eabb5aaa565ab6e4cd04a44fb15d21f173b09b3b4872395651d4941b0 WatchSource:0}: Error finding container 98907b6eabb5aaa565ab6e4cd04a44fb15d21f173b09b3b4872395651d4941b0: Status 404 returned error can't find the container with id 98907b6eabb5aaa565ab6e4cd04a44fb15d21f173b09b3b4872395651d4941b0 Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.967164 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.968116 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.969884 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq"] Dec 10 15:48:18 crc kubenswrapper[5114]: I1210 15:48:18.982327 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.017009 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kb5vt" podStartSLOduration=106.016989471 podStartE2EDuration="1m46.016989471s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.015996336 +0000 UTC m=+124.736797513" watchObservedRunningTime="2025-12-10 15:48:19.016989471 +0000 UTC m=+124.737790648" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.030696 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.530678706 +0000 UTC m=+125.251479873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.031018 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.132242 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.136323 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.636300903 +0000 UTC m=+125.357102080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: W1210 15:48:19.179850 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397fe639_b4d1_4a13_9327_50661b7f938a.slice/crio-70aba5d2528518a3261aea4d54eb192e252f1c511706d60b9e9d2ac494fdc5d2 WatchSource:0}: Error finding container 70aba5d2528518a3261aea4d54eb192e252f1c511706d60b9e9d2ac494fdc5d2: Status 404 returned error can't find the container with id 70aba5d2528518a3261aea4d54eb192e252f1c511706d60b9e9d2ac494fdc5d2 Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.234554 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.235188 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.735170819 +0000 UTC m=+125.455971996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.239729 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.281565 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.285015 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.298594 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-57xp7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:48:19 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Dec 10 15:48:19 crc kubenswrapper[5114]: [+]process-running ok Dec 10 15:48:19 crc kubenswrapper[5114]: healthz check failed Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.298904 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" podUID="b32a5174-fc1f-4e6e-8173-414921f6d86f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.315524 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.335676 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.336211 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.836188569 +0000 UTC m=+125.556989756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.432612 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.438233 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.438668 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:19.938651976 +0000 UTC m=+125.659453163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.450351 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" podStartSLOduration=105.450325321 podStartE2EDuration="1m45.450325321s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.426777587 +0000 UTC m=+125.147578784" watchObservedRunningTime="2025-12-10 15:48:19.450325321 +0000 UTC m=+125.171126498" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.459494 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" podStartSLOduration=106.459478172 podStartE2EDuration="1m46.459478172s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.45782297 +0000 UTC m=+125.178624177" watchObservedRunningTime="2025-12-10 15:48:19.459478172 +0000 UTC m=+125.180279349" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.517312 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.518230 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" 
event={"ID":"014c41e7-892d-4fbc-ad4b-f2cd257e83b3","Type":"ContainerStarted","Data":"116481e9b3fa6130ebfe735d33a1ef7088920ab0f344945946891282df5621f1"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.538010 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" event={"ID":"64fd8522-fc45-4417-8a06-59b34f001433","Type":"ContainerStarted","Data":"9102ef27198ffeb8cd6ff80efe18d22a110b6c4591d1fddfbd06e6ed4a8d155d"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.538985 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.539207 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.039174614 +0000 UTC m=+125.759975791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.539864 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.540243 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.040229241 +0000 UTC m=+125.761030418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.543776 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-7nbcs" event={"ID":"d68dcc8d-b977-44e9-a63c-1cee775b50f2","Type":"ContainerStarted","Data":"b6891673874f3833eb2f81fc9ad63ba85af1729726240a2a7787db0948b5e856"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.544558 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerStarted","Data":"229b4a1ee8d7dec1bbf9aece8b2b7f657274cc13ae39af55fff89463cce2d549"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.547613 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" event={"ID":"397fe639-b4d1-4a13-9327-50661b7f938a","Type":"ContainerStarted","Data":"70aba5d2528518a3261aea4d54eb192e252f1c511706d60b9e9d2ac494fdc5d2"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.548658 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" event={"ID":"0336e7c6-4749-46b7-8709-0b03b511147d","Type":"ContainerStarted","Data":"de08f4b49a6d832c208499b697231af95f413e81e874e967da9fa9d15ef5f294"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.549879 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dnk6l" event={"ID":"edeb5b7f-b7b2-4b21-a634-f9113bbe9487","Type":"ContainerStarted","Data":"0c5095bc970d977034f3e25225ba81fa247467938d3531b9799e9ca7f6a94c2e"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.550751 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" event={"ID":"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8","Type":"ContainerStarted","Data":"eac40ba9f08e0bea50db94d5975d623533d76f3afdbc6ea74dca40304445eabd"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.552213 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" event={"ID":"81a2a5e5-1a13-4e0d-81a7-868716149070","Type":"ContainerStarted","Data":"98907b6eabb5aaa565ab6e4cd04a44fb15d21f173b09b3b4872395651d4941b0"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.553793 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" event={"ID":"79e5de70-9480-4091-8467-73e7b3d12424","Type":"ContainerStarted","Data":"25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.557305 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" event={"ID":"36137111-458a-4f99-bcbf-6606f80d8ee0","Type":"ContainerStarted","Data":"997641ccacb98d5eabe54c9addb672eb3c7c0804cb616f3c460748fa6fb51f59"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 
15:48:19.562208 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" event={"ID":"d6e3098c-67bd-4d09-b1f5-04309f94d5ac","Type":"ContainerStarted","Data":"ce567b31c9bc5cafac3b88e6212ffc04d195280fc9d05e781824eaa9f058d35d"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.563496 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-db5ff"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.584144 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" event={"ID":"fbc81518-f2e1-452d-a57e-d52678bf4359","Type":"ContainerStarted","Data":"3dfd802c20c82b75029341918fb1a20a700b8be26f42839f7b83156eab933902"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.591025 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" event={"ID":"cb1fe217-9de4-455e-80e1-dd01805e7935","Type":"ContainerStarted","Data":"73651e0562ca08d5552ae224f2033f6f80ddb98ce07b09301b603726ccf000a0"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.596477 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" event={"ID":"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a","Type":"ContainerStarted","Data":"e9d0befce6b6d7f2391ce2a0f13f72612367f6f0212029da27a9dd65fe2dcca8"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.609589 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-59hqn" event={"ID":"b2b61e86-45b8-4491-8236-f056a381a5ab","Type":"ContainerStarted","Data":"186fd7f4a87ebe33c1a016b26524747856d1ddcf22469b28640709dd21f4d504"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.613019 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v9phm"] Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.626564 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.640470 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-7nbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.640549 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-7nbcs" podUID="d68dcc8d-b977-44e9-a63c-1cee775b50f2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.643427 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.644644 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.144609966 +0000 UTC m=+125.865411283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.646401 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.646892 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.146869113 +0000 UTC m=+125.867670290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.652877 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-m26t8" podStartSLOduration=106.652857714 podStartE2EDuration="1m46.652857714s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.651990743 +0000 UTC m=+125.372791930" watchObservedRunningTime="2025-12-10 15:48:19.652857714 +0000 UTC m=+125.373658891" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.654254 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wbl48" podStartSLOduration=106.654241419 podStartE2EDuration="1m46.654241419s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.62774498 +0000 UTC m=+125.348546167" watchObservedRunningTime="2025-12-10 15:48:19.654241419 +0000 UTC m=+125.375042596" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.692890 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" event={"ID":"00c50168-1c40-4c3d-9a03-c99c13223df8","Type":"ContainerStarted","Data":"69eadedeb37e0fdcec5794c962ca6a55032c5d5273c6536124330cb6067dfc9a"} Dec 
10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.697899 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mr6mk" podStartSLOduration=106.697877461 podStartE2EDuration="1m46.697877461s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.673865605 +0000 UTC m=+125.394666802" watchObservedRunningTime="2025-12-10 15:48:19.697877461 +0000 UTC m=+125.418678638" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.698615 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" event={"ID":"4ff01055-87cd-4379-ba86-8778485be566","Type":"ContainerStarted","Data":"26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.698880 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.730859 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-wp2cx" event={"ID":"7fddbf1c-72d3-474b-b262-e852d4ea917b","Type":"ContainerStarted","Data":"32dc9bae351bcaec7fbcc6ee17c3aa0f9761d15a06b657128480d05e6e19c20e"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.735019 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" event={"ID":"6f642643-9482-4e17-b0f7-bd7bf530f5a1","Type":"ContainerStarted","Data":"db383cad60f214cc9e57e0ab8feecf2505e026888fdfaafde46429da32f329b3"} Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.755123 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.755666 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.755724 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-llrrx" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.756583 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bdmmp" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.757488 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.257453735 +0000 UTC m=+125.978254912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.758750 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:19 crc kubenswrapper[5114]: W1210 15:48:19.764398 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7d243eb_5e31_4635_803d_2408fe9f8575.slice/crio-8d7ed8ac600a8defdba6db94f80df4e27f5ec7a38884ab521bd7b33bbd48e196 WatchSource:0}: Error finding container 8d7ed8ac600a8defdba6db94f80df4e27f5ec7a38884ab521bd7b33bbd48e196: Status 404 returned error can't find the container with id 8d7ed8ac600a8defdba6db94f80df4e27f5ec7a38884ab521bd7b33bbd48e196 Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.777186 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b4cz4" podStartSLOduration=106.777171173 podStartE2EDuration="1m46.777171173s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.775204813 +0000 UTC m=+125.496005990" watchObservedRunningTime="2025-12-10 15:48:19.777171173 +0000 UTC m=+125.497972340" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.871172 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.872049 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.372022508 +0000 UTC m=+126.092823685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.937426 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bnxz" podStartSLOduration=105.937404008 podStartE2EDuration="1m45.937404008s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.933960591 +0000 UTC m=+125.654761778" watchObservedRunningTime="2025-12-10 15:48:19.937404008 +0000 UTC m=+125.658205185" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.977190 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fv4w" podStartSLOduration=106.977163612 podStartE2EDuration="1m46.977163612s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:19.973559491 +0000 UTC m=+125.694360688" watchObservedRunningTime="2025-12-10 15:48:19.977163612 +0000 UTC m=+125.697964789" Dec 10 15:48:19 crc kubenswrapper[5114]: I1210 15:48:19.990574 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:19 crc kubenswrapper[5114]: E1210 15:48:19.991381 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.49135468 +0000 UTC m=+126.212155857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.018334 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" podStartSLOduration=7.018303401 podStartE2EDuration="7.018303401s" podCreationTimestamp="2025-12-10 15:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.014146696 +0000 UTC m=+125.734947873" watchObservedRunningTime="2025-12-10 15:48:20.018303401 +0000 UTC m=+125.739104578" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.057159 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-7nbcs" podStartSLOduration=107.057130711 podStartE2EDuration="1m47.057130711s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.055743126 +0000 UTC m=+125.776544303" watchObservedRunningTime="2025-12-10 15:48:20.057130711 +0000 UTC m=+125.777931898" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.092590 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.093067 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.593052008 +0000 UTC m=+126.313853185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.097987 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gg274" podStartSLOduration=8.097944021 podStartE2EDuration="8.097944021s" podCreationTimestamp="2025-12-10 15:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.090714309 +0000 UTC m=+125.811515486" watchObservedRunningTime="2025-12-10 15:48:20.097944021 +0000 UTC m=+125.818745198" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.184288 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-wbclb" podStartSLOduration=107.18424403 podStartE2EDuration="1m47.18424403s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.154319715 +0000 UTC m=+125.875120892" watchObservedRunningTime="2025-12-10 15:48:20.18424403 +0000 UTC m=+125.905045207" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.196587 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.197324 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.69729371 +0000 UTC m=+126.418094887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.248531 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z72bq" podStartSLOduration=107.248513063 podStartE2EDuration="1m47.248513063s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.199248719 +0000 UTC m=+125.920049906" watchObservedRunningTime="2025-12-10 15:48:20.248513063 +0000 UTC m=+125.969314240" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.277107 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hxjhm" podStartSLOduration=7.277080524 podStartE2EDuration="7.277080524s" podCreationTimestamp="2025-12-10 15:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.233848463 +0000 UTC m=+125.954649640" watchObservedRunningTime="2025-12-10 15:48:20.277080524 +0000 UTC m=+125.997881701" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.298083 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-57xp7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:48:20 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Dec 10 15:48:20 crc kubenswrapper[5114]: [+]process-running ok Dec 10 15:48:20 crc kubenswrapper[5114]: healthz check failed Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.298211 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" podUID="b32a5174-fc1f-4e6e-8173-414921f6d86f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.299565 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-59hqn" podStartSLOduration=107.299548261 podStartE2EDuration="1m47.299548261s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.298468584 +0000 UTC m=+126.019269761" watchObservedRunningTime="2025-12-10 15:48:20.299548261 +0000 UTC m=+126.020349438" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.304363 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 
15:48:20.304720 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.804702641 +0000 UTC m=+126.525503818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.350203 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-cfjf4" podStartSLOduration=107.35018964 podStartE2EDuration="1m47.35018964s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.348715203 +0000 UTC m=+126.069516370" watchObservedRunningTime="2025-12-10 15:48:20.35018964 +0000 UTC m=+126.070990807" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.374609 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" podStartSLOduration=107.374593126 podStartE2EDuration="1m47.374593126s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.374521054 +0000 UTC m=+126.095322231" watchObservedRunningTime="2025-12-10 15:48:20.374593126 +0000 UTC m=+126.095394303" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.407161 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.407534 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:20.907518177 +0000 UTC m=+126.628319354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.481344 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x9hfx" podStartSLOduration=107.481329511 podStartE2EDuration="1m47.481329511s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.431550394 +0000 UTC m=+126.152351571" watchObservedRunningTime="2025-12-10 15:48:20.481329511 +0000 UTC m=+126.202130688" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.510245 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.510532 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.010521258 +0000 UTC m=+126.731322435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.517468 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-wp2cx" podStartSLOduration=106.517455223 podStartE2EDuration="1m46.517455223s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.516410406 +0000 UTC m=+126.237211573" watchObservedRunningTime="2025-12-10 15:48:20.517455223 +0000 UTC m=+126.238256400" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.566139 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" podStartSLOduration=106.566123652 podStartE2EDuration="1m46.566123652s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.565071165 +0000 UTC m=+126.285872342" watchObservedRunningTime="2025-12-10 15:48:20.566123652 +0000 UTC m=+126.286924829" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.612131 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.612418 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.11238848 +0000 UTC m=+126.833189677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.612887 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.613213 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.11320496 +0000 UTC m=+126.834006137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.714054 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.714349 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.214333403 +0000 UTC m=+126.935134580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.768365 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" event={"ID":"6f642643-9482-4e17-b0f7-bd7bf530f5a1","Type":"ContainerStarted","Data":"5fa0a0e3bf128ae0ab75fe559a067865497ff5ab099a6a11adb5f03c97736343"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.786968 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" event={"ID":"c910b353-a094-46d1-9980-657a309d9050","Type":"ContainerStarted","Data":"1ebed2505d147d6a317cd71938a1570d38337917f097c30fa927e0a10f179d52"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.802622 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" event={"ID":"95096727-f31b-4fd3-914a-152df463c991","Type":"ContainerStarted","Data":"a0f6f873114d17825977348aa3959f663786c4d05f1b74915101cdb1116549da"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.803076 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-288ln" podStartSLOduration=107.803065494 podStartE2EDuration="1m47.803065494s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.802700644 +0000 UTC m=+126.523501821" watchObservedRunningTime="2025-12-10 15:48:20.803065494 +0000 UTC m=+126.523866671" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.815817 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.818284 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.318254747 +0000 UTC m=+127.039055924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.823634 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerStarted","Data":"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.823694 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.844401 5114 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-wpjqd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.844566 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.847018 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-tsm29" podStartSLOduration=107.847003643 podStartE2EDuration="1m47.847003643s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.84094217 +0000 UTC m=+126.561743357" watchObservedRunningTime="2025-12-10 15:48:20.847003643 +0000 UTC m=+126.567804810" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.867297 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" event={"ID":"397fe639-b4d1-4a13-9327-50661b7f938a","Type":"ContainerStarted","Data":"a2dac4762d7cba463be0aeb03059a2d98d51f3d7b3af4c8fbf0881b9bcbf995f"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.868848 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.878200 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" podStartSLOduration=106.87815918 podStartE2EDuration="1m46.87815918s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.875139203 +0000 UTC m=+126.595940400" watchObservedRunningTime="2025-12-10 15:48:20.87815918 +0000 UTC m=+126.598960357" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 
15:48:20.886671 5114 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-w2skq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.886745 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" podUID="397fe639-b4d1-4a13-9327-50661b7f938a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.899621 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" podStartSLOduration=106.899599181 podStartE2EDuration="1m46.899599181s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.895262811 +0000 UTC m=+126.616063988" watchObservedRunningTime="2025-12-10 15:48:20.899599181 +0000 UTC m=+126.620400358" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.915315 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" event={"ID":"b51af6b1-547c-4709-b115-93e1173bca33","Type":"ContainerStarted","Data":"f577922ec2bafd402678356f5df8915dd31c538c88e418822c8aed7d3f4a0383"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.917179 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:20 crc kubenswrapper[5114]: E1210 15:48:20.918840 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.418819156 +0000 UTC m=+127.139620333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.944770 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" event={"ID":"53ed4e9f-c2c3-47bc-a20b-42db16b6d57a","Type":"ContainerStarted","Data":"2c3682af56d94c816a749442dcb41e05d196971158e5693af9197fc82a2179b6"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.954721 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" event={"ID":"7d00e7eb-f974-4213-90bd-aeef8bed3a8a","Type":"ContainerStarted","Data":"4d90c8d2e64b931579e8ed7ad6c704d9e94fd0735beb5c6a942f05d52c9a42d1"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.959993 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" event={"ID":"c7d243eb-5e31-4635-803d-2408fe9f8575","Type":"ContainerStarted","Data":"8d7ed8ac600a8defdba6db94f80df4e27f5ec7a38884ab521bd7b33bbd48e196"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.973440 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" event={"ID":"36137111-458a-4f99-bcbf-6606f80d8ee0","Type":"ContainerStarted","Data":"be95653ec7997049eaff01a473bc23ae09fcaa1b03cf5ea1a5a2be966be12568"} Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.991983 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jzw4f" podStartSLOduration=107.991954783 podStartE2EDuration="1m47.991954783s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.991722167 +0000 UTC m=+126.712523364" watchObservedRunningTime="2025-12-10 15:48:20.991954783 +0000 UTC m=+126.712755970" Dec 10 15:48:20 crc kubenswrapper[5114]: I1210 15:48:20.992708 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" podStartSLOduration=107.992699081 podStartE2EDuration="1m47.992699081s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:20.967650539 +0000 UTC m=+126.688451716" watchObservedRunningTime="2025-12-10 15:48:20.992699081 +0000 UTC m=+126.713500258" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.020029 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.021200 5114 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.52117693 +0000 UTC m=+127.241978107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.059018 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" event={"ID":"26a9e3c3-f100-41fa-81ea-2790ebff1438","Type":"ContainerStarted","Data":"74954cc9053df31518e741035d13fb552b1591f3284509e662e237a8af7ce0a1"} Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.070005 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" event={"ID":"8803937b-0d28-40bc-bdb9-12ea0b8d003c","Type":"ContainerStarted","Data":"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f"} Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.071744 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.086948 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dnk6l" event={"ID":"edeb5b7f-b7b2-4b21-a634-f9113bbe9487","Type":"ContainerStarted","Data":"7996d0aca6b3a55c43c5ec934f98c2f526a38d92cb86e7420ae0829ab356e24b"} Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.087938 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.123196 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.143285 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.643237402 +0000 UTC m=+127.364038579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.171540 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" event={"ID":"79e5de70-9480-4091-8467-73e7b3d12424","Type":"ContainerStarted","Data":"a527fde15c9391ea5a45b1bbadb4500ceaee270c00036459d63619fddd93edf1"} Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.173106 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" podStartSLOduration=108.173089486 podStartE2EDuration="1m48.173089486s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:21.106195197 +0000 UTC m=+126.826996374" watchObservedRunningTime="2025-12-10 15:48:21.173089486 +0000 UTC m=+126.893890663" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.173397 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dnk6l" podStartSLOduration=9.173391903 podStartE2EDuration="9.173391903s" podCreationTimestamp="2025-12-10 15:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:21.172014658 +0000 UTC m=+126.892815835" watchObservedRunningTime="2025-12-10 15:48:21.173391903 +0000 UTC m=+126.894193080" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.173628 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" event={"ID":"d6e3098c-67bd-4d09-b1f5-04309f94d5ac","Type":"ContainerStarted","Data":"78b9e257a45211e09bd552db8bf072f5413f80a48a27904f0acb0236c1a39058"} Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.183865 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.191097 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-7nbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.191165 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-7nbcs" podUID="d68dcc8d-b977-44e9-a63c-1cee775b50f2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.193237 5114 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-2bl74 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" 
start-of-body= Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.193322 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" podUID="d6e3098c-67bd-4d09-b1f5-04309f94d5ac" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.227985 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" podStartSLOduration=107.227968741 podStartE2EDuration="1m47.227968741s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:21.225065418 +0000 UTC m=+126.945866595" watchObservedRunningTime="2025-12-10 15:48:21.227968741 +0000 UTC m=+126.948769918" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.231862 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.237677 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.737658776 +0000 UTC m=+127.458459953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.293484 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-57xp7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:48:21 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Dec 10 15:48:21 crc kubenswrapper[5114]: [+]process-running ok Dec 10 15:48:21 crc kubenswrapper[5114]: healthz check failed Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.293583 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" podUID="b32a5174-fc1f-4e6e-8173-414921f6d86f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.317811 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.346477 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.346743 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.84673092 +0000 UTC m=+127.567532097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.345921 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" podStartSLOduration=107.345902339 podStartE2EDuration="1m47.345902339s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:21.262996946 +0000 UTC m=+126.983798123" watchObservedRunningTime="2025-12-10 15:48:21.345902339 +0000 UTC m=+127.066703516" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.448831 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.449235 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:21.949223737 +0000 UTC m=+127.670024914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.550181 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.550377 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.05034504 +0000 UTC m=+127.771146227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.550467 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.550851 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.050825372 +0000 UTC m=+127.771626549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.652078 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.652236 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.152206562 +0000 UTC m=+127.873007739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.652594 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.652939 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.15292901 +0000 UTC m=+127.873730187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.660613 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-55xzh"] Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.753584 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.753744 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.253722335 +0000 UTC m=+127.974523512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.753960 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.754324 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.25431177 +0000 UTC m=+127.975112957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.800961 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.855438 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.856511 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.356479599 +0000 UTC m=+128.077280776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.856801 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.857181 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.357165856 +0000 UTC m=+128.077967033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:21 crc kubenswrapper[5114]: I1210 15:48:21.957863 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:21 crc kubenswrapper[5114]: E1210 15:48:21.958367 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.458345981 +0000 UTC m=+128.179147158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.059538 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.059965 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.559944316 +0000 UTC m=+128.280745493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.160827 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.161026 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.660995237 +0000 UTC m=+128.381796424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.161380 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.161754 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.661738356 +0000 UTC m=+128.382539533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.181781 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" event={"ID":"95096727-f31b-4fd3-914a-152df463c991","Type":"ContainerStarted","Data":"2345a7af38ac58906825aa7c443c4df54959b19a9b9d589b1b01fda9acf09606"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.181825 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" event={"ID":"95096727-f31b-4fd3-914a-152df463c991","Type":"ContainerStarted","Data":"3be4cfe9ec685b8034d596248d9672fa73fccab4981d581e09a24932349ba3d9"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.185759 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" event={"ID":"b51af6b1-547c-4709-b115-93e1173bca33","Type":"ContainerStarted","Data":"9ac3632fdbb9e97878fb16268ea4ce54a297eb8e3dbaaf03b145616108fbb1dd"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.185797 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" event={"ID":"b51af6b1-547c-4709-b115-93e1173bca33","Type":"ContainerStarted","Data":"4068c85c0021f37d6d62bbbdf27b66b578697c8c17ae429374c7527604b08794"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.187238 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nrpbd" event={"ID":"81a2a5e5-1a13-4e0d-81a7-868716149070","Type":"ContainerStarted","Data":"bb746183bb2d65091037fbb91004fca3d1cb373b1e0a558a18e9f02876628200"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.189508 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-j45nf" event={"ID":"2e757457-618f-4625-8008-3cb8989aa882","Type":"ContainerStarted","Data":"9055b592dac96a7630cc6d124f501ac942de4faa856c2a0d25a3e2acf248c41c"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.191049 5114 generic.go:358] "Generic (PLEG): container finished" podID="7d00e7eb-f974-4213-90bd-aeef8bed3a8a" containerID="2a792a4f2c13472804b4fa3aa8d2456adffecbde0a2832df795af9a21e8b8862" exitCode=0 Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.191096 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" event={"ID":"7d00e7eb-f974-4213-90bd-aeef8bed3a8a","Type":"ContainerDied","Data":"2a792a4f2c13472804b4fa3aa8d2456adffecbde0a2832df795af9a21e8b8862"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.193719 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" event={"ID":"014c41e7-892d-4fbc-ad4b-f2cd257e83b3","Type":"ContainerStarted","Data":"0eba116b2cc35fcf4554feecdd4c25c1d4cb8924f06c1dda4e93b0fd5dfd031f"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.193900 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.195595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" event={"ID":"c7d243eb-5e31-4635-803d-2408fe9f8575","Type":"ContainerStarted","Data":"3ea53b882c0d4708da03ff6debf88c4d4dc62d0642bf9df19aefd52d83cba5bd"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.199780 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.200556 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" event={"ID":"0336e7c6-4749-46b7-8709-0b03b511147d","Type":"ContainerStarted","Data":"a7b77ef29bac396f45c385d6dbd06ce6e9147f727a4113f6850e3a7e3cf2294e"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.204301 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" event={"ID":"36137111-458a-4f99-bcbf-6606f80d8ee0","Type":"ContainerStarted","Data":"6323b07414563d01c9b0d03955e99814b4d4274fb271824ce92a22042bab770d"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.206615 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" event={"ID":"00c50168-1c40-4c3d-9a03-c99c13223df8","Type":"ContainerStarted","Data":"4d29a62cb66e64b469ea1c8cfa0a376abe568e2ca457b4a07c98d0d0dcc9dcca"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.206645 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" event={"ID":"00c50168-1c40-4c3d-9a03-c99c13223df8","Type":"ContainerStarted","Data":"a8bea1cec5fec0fbe7355cd0d791831969a5cad86fcb1e5d67960045eff5bd5d"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.218803 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" 
event={"ID":"26a9e3c3-f100-41fa-81ea-2790ebff1438","Type":"ContainerStarted","Data":"c29efc026c1929c0f96caebb7ebdfb2b0d2872c5dad779dc83391ff16af1e428"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.218852 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" event={"ID":"26a9e3c3-f100-41fa-81ea-2790ebff1438","Type":"ContainerStarted","Data":"115ae22cdfad0bee7690bc4deecaa2ca46d549fe03bf2d1d20bf635d98890943"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.218969 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.222578 5114 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-wpjqd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.222663 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.224158 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" event={"ID":"2bf2fd29-b4b2-4669-a9a8-99c061aa98c8","Type":"ContainerStarted","Data":"f9d2138f57f85f5856df7e911ade9fbe5420ee3718d5dc71daf40de843658f92"} Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.227131 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-7nbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.227190 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-7nbcs" podUID="d68dcc8d-b977-44e9-a63c-1cee775b50f2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.238510 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-2bl74" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.244592 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-w2skq" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.266066 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.266176 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.766152062 +0000 UTC m=+128.486953239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.266614 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.274068 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.774053942 +0000 UTC m=+128.494855119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.294701 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" podStartSLOduration=108.294679752 podStartE2EDuration="1m48.294679752s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.294479967 +0000 UTC m=+128.015281154" watchObservedRunningTime="2025-12-10 15:48:22.294679752 +0000 UTC m=+128.015480939" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.295320 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-db5ff" podStartSLOduration=108.295313478 podStartE2EDuration="1m48.295313478s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.227444385 +0000 UTC m=+127.948245562" watchObservedRunningTime="2025-12-10 15:48:22.295313478 +0000 UTC m=+128.016114665" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.300461 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.301773 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.311161 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-57xp7" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.331900 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dslgq" podStartSLOduration=108.331884422 podStartE2EDuration="1m48.331884422s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.32906315 +0000 UTC m=+128.049864337" watchObservedRunningTime="2025-12-10 15:48:22.331884422 +0000 UTC m=+128.052685599" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.369251 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.370492 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.870473046 +0000 UTC m=+128.591274233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.390189 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-j6t46" podStartSLOduration=109.390164893 podStartE2EDuration="1m49.390164893s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.389741012 +0000 UTC m=+128.110542209" watchObservedRunningTime="2025-12-10 15:48:22.390164893 +0000 UTC m=+128.110966080" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.430315 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wx9kv" podStartSLOduration=109.430301766 podStartE2EDuration="1m49.430301766s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.428718936 +0000 UTC m=+128.149520113" watchObservedRunningTime="2025-12-10 15:48:22.430301766 +0000 UTC m=+128.151102943" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.458874 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" podStartSLOduration=108.458856797 
podStartE2EDuration="1m48.458856797s" podCreationTimestamp="2025-12-10 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.458292563 +0000 UTC m=+128.179093740" watchObservedRunningTime="2025-12-10 15:48:22.458856797 +0000 UTC m=+128.179657964" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.477393 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.477774 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:22.977760835 +0000 UTC m=+128.698562012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.488738 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" podStartSLOduration=109.488718061 podStartE2EDuration="1m49.488718061s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.486634279 +0000 UTC m=+128.207435466" watchObservedRunningTime="2025-12-10 15:48:22.488718061 +0000 UTC m=+128.209519238" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.517432 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-lwbkt" podStartSLOduration=109.517408786 podStartE2EDuration="1m49.517408786s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.516869512 +0000 UTC m=+128.237670689" watchObservedRunningTime="2025-12-10 15:48:22.517408786 +0000 UTC m=+128.238209963" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.556163 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.578615 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.578726 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.078702082 +0000 UTC m=+128.799503259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.579061 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.579376 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.079367069 +0000 UTC m=+128.800168246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.622970 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-dsvk5" podStartSLOduration=109.622953589 podStartE2EDuration="1m49.622953589s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:22.597788524 +0000 UTC m=+128.318589711" watchObservedRunningTime="2025-12-10 15:48:22.622953589 +0000 UTC m=+128.343754766" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.624179 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.661699 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.661856 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.673705 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.680885 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.681399 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.181380354 +0000 UTC m=+128.902181531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.681489 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.739663 5114 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-vxnbb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 10 15:48:22 crc kubenswrapper[5114]: [+]log ok Dec 10 15:48:22 crc kubenswrapper[5114]: [+]poststarthook/generic-apiserver-start-informers ok Dec 10 15:48:22 crc kubenswrapper[5114]: [-]poststarthook/max-in-flight-filter failed: reason withheld Dec 10 15:48:22 crc kubenswrapper[5114]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 10 15:48:22 crc kubenswrapper[5114]: healthz check failed Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.739737 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" podUID="014c41e7-892d-4fbc-ad4b-f2cd257e83b3" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.783135 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.783320 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: 
\"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.783354 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.783707 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.283692337 +0000 UTC m=+129.004493514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.884850 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.885105 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.385073647 +0000 UTC m=+129.105874824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.885374 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.885410 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.885521 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.885803 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.385796255 +0000 UTC m=+129.106597432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.886256 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.923980 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.940677 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44600: no serving certificate available for the kubelet" Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.986980 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.987171 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.487145324 +0000 UTC m=+129.207946501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:22 crc kubenswrapper[5114]: I1210 15:48:22.987403 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:22 crc kubenswrapper[5114]: E1210 15:48:22.987759 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.487745279 +0000 UTC m=+129.208546456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.009006 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.033867 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44614: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.088798 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.089023 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.588985885 +0000 UTC m=+129.309787062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.089390 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.089775 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.589757925 +0000 UTC m=+129.310559102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.138311 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.139403 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44624: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.190730 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.190894 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.690862597 +0000 UTC m=+129.411663774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.191504 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.191850 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.691839852 +0000 UTC m=+129.412641029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.234358 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44636: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.245757 5114 generic.go:358] "Generic (PLEG): container finished" podID="79e5de70-9480-4091-8467-73e7b3d12424" containerID="a527fde15c9391ea5a45b1bbadb4500ceaee270c00036459d63619fddd93edf1" exitCode=0 Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.292387 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.292870 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.792851742 +0000 UTC m=+129.513652919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.337150 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44648: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.394061 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.394415 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.894399556 +0000 UTC m=+129.615200733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.433873 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44658: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.453124 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44664: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.496324 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.496893 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:23.996874623 +0000 UTC m=+129.717675800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.599007 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.599472 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.099451413 +0000 UTC m=+129.820252590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.617848 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44674: no serving certificate available for the kubelet" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.701424 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.701534 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.20151137 +0000 UTC m=+129.922312557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.702005 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.702496 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.202478324 +0000 UTC m=+129.923279541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.803282 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.803830 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.303790332 +0000 UTC m=+130.024591509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856246 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" event={"ID":"7d00e7eb-f974-4213-90bd-aeef8bed3a8a","Type":"ContainerStarted","Data":"22e283dd3b82264a9ce99f5aeb102f8b4a12f675840dc1f30e352920834ece32"} Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856303 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" event={"ID":"79e5de70-9480-4091-8467-73e7b3d12424","Type":"ContainerDied","Data":"a527fde15c9391ea5a45b1bbadb4500ceaee270c00036459d63619fddd93edf1"} Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856321 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856335 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856347 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.856753 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.859808 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.868551 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.868585 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.868601 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.870119 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" gracePeriod=30 Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.879907 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.885340 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.889714 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.889758 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.897122 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.906169 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqtd\" (UniqueName: \"kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.906232 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.906305 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.906577 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.907569 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2dk\" (UniqueName: \"kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.907843 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdtt\" (UniqueName: \"kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.908078 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.908325 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.908445 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.908748 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:23 crc kubenswrapper[5114]: E1210 15:48:23.911247 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.411231084 +0000 UTC m=+130.132032261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.921495 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-vxnbb" Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.928848 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:48:23 crc kubenswrapper[5114]: I1210 15:48:23.925307 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.010302 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" podStartSLOduration=111.010284365 podStartE2EDuration="1m51.010284365s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:23.996816595 +0000 UTC m=+129.717617772" watchObservedRunningTime="2025-12-10 15:48:24.010284365 +0000 UTC m=+129.731085542" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023215 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023495 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023531 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023555 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh747\" (UniqueName: \"kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023574 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2dk\" (UniqueName: \"kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023611 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkdtt\" (UniqueName: \"kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023644 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc 
kubenswrapper[5114]: I1210 15:48:24.023673 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023725 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023764 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023792 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023849 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frqtd\" (UniqueName: \"kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.023889 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.024458 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.024465 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.024538 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.524522305 +0000 UTC m=+130.245323482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.024806 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.025039 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.025058 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.025243 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.052601 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdtt\" (UniqueName: \"kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt\") pod \"community-operators-dvt8r\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.055715 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2dk\" (UniqueName: \"kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk\") pod \"certified-operators-lfhws\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.056017 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqtd\" (UniqueName: \"kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd\") pod \"community-operators-clgwg\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.125125 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " 
pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.125528 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.125596 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.125652 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bh747\" (UniqueName: \"kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.126763 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.626750196 +0000 UTC m=+130.347551363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.126792 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.127014 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.146254 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh747\" (UniqueName: \"kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747\") pod \"certified-operators-gn7sf\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.179295 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.229576 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.229625 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.229758 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.729737526 +0000 UTC m=+130.450538693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.229820 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.230130 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.730123245 +0000 UTC m=+130.450924412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.246312 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.260418 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.280890 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"94dee5e7-4d12-43db-83e6-44b77c7ae2ce","Type":"ContainerStarted","Data":"0019f0f9ccb9d6575223db6bd1b80823c26d2b87b81de862542d666c4be8dc5e"} Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.298987 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44688: no serving certificate available for the kubelet" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.331046 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.331952 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.831921006 +0000 UTC m=+130.552722183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.405756 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.432467 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.432845 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:24.932826253 +0000 UTC m=+130.653627500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.534086 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.534319 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.034258984 +0000 UTC m=+130.755060161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.534642 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.534931 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.034923961 +0000 UTC m=+130.755725138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.625746 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.635246 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.635370 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.135349106 +0000 UTC m=+130.856150283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.636181 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.636700 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.13668516 +0000 UTC m=+130.857486337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.692226 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:48:24 crc kubenswrapper[5114]: W1210 15:48:24.712157 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6568bc5a_ae55_48c0_b351_c5fbfafc3a6e.slice/crio-294a415a4a4962292c06366d514e97e0e19a65fc335a73732416e1f29889f00f WatchSource:0}: Error finding container 294a415a4a4962292c06366d514e97e0e19a65fc335a73732416e1f29889f00f: Status 404 returned error can't find the container with id 294a415a4a4962292c06366d514e97e0e19a65fc335a73732416e1f29889f00f Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737305 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737386 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume\") pod \"79e5de70-9480-4091-8467-73e7b3d12424\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.737455 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.237430283 +0000 UTC m=+130.958231460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737558 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") pod \"79e5de70-9480-4091-8467-73e7b3d12424\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737697 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwb7v\" (UniqueName: \"kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v\") pod \"79e5de70-9480-4091-8467-73e7b3d12424\" (UID: \"79e5de70-9480-4091-8467-73e7b3d12424\") " Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737853 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume" (OuterVolumeSpecName: "config-volume") pod "79e5de70-9480-4091-8467-73e7b3d12424" (UID: "79e5de70-9480-4091-8467-73e7b3d12424"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.737936 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.738218 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5de70-9480-4091-8467-73e7b3d12424-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.738243 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.238228884 +0000 UTC m=+130.959030061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.743302 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79e5de70-9480-4091-8467-73e7b3d12424" (UID: "79e5de70-9480-4091-8467-73e7b3d12424"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.745484 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v" (OuterVolumeSpecName: "kube-api-access-nwb7v") pod "79e5de70-9480-4091-8467-73e7b3d12424" (UID: "79e5de70-9480-4091-8467-73e7b3d12424"). InnerVolumeSpecName "kube-api-access-nwb7v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.799615 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:48:24 crc kubenswrapper[5114]: W1210 15:48:24.805484 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod949ddda2_62c3_484c_9034_3b447502cf4d.slice/crio-757b4dbf81394fffa875318b552260c6723fb5afa0fa7e83b4e93757491a6053 WatchSource:0}: Error finding container 757b4dbf81394fffa875318b552260c6723fb5afa0fa7e83b4e93757491a6053: Status 404 returned error can't find the container with id 757b4dbf81394fffa875318b552260c6723fb5afa0fa7e83b4e93757491a6053 Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.818908 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:48:24 crc kubenswrapper[5114]: W1210 15:48:24.821929 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6eba38_9248_4153_acdb_87d7acc29df0.slice/crio-a81b9af3f58df1d5eac84883fe4f247acdcb4959fab2b89e0bf720d5b42caf2d WatchSource:0}: Error finding container a81b9af3f58df1d5eac84883fe4f247acdcb4959fab2b89e0bf720d5b42caf2d: Status 404 returned error can't find the container with id a81b9af3f58df1d5eac84883fe4f247acdcb4959fab2b89e0bf720d5b42caf2d Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.839599 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.839847 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.339813968 +0000 UTC m=+131.060615155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.839982 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.840408 5114 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5de70-9480-4091-8467-73e7b3d12424-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.840433 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nwb7v\" (UniqueName: \"kubernetes.io/projected/79e5de70-9480-4091-8467-73e7b3d12424-kube-api-access-nwb7v\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.840710 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.340700241 +0000 UTC m=+131.061501418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.941794 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.941914 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.441894636 +0000 UTC m=+131.162695813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: I1210 15:48:24.942244 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.942558 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.442548482 +0000 UTC m=+131.163349659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:24 crc kubenswrapper[5114]: E1210 15:48:24.971513 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.043834 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.044412 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.544391543 +0000 UTC m=+131.265192720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.122029 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.122560 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79e5de70-9480-4091-8467-73e7b3d12424" containerName="collect-profiles" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.122576 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e5de70-9480-4091-8467-73e7b3d12424" containerName="collect-profiles" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.122648 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="79e5de70-9480-4091-8467-73e7b3d12424" containerName="collect-profiles" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.145177 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.145671 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.64565598 +0000 UTC m=+131.366457157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.246667 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.247179 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.747157982 +0000 UTC m=+131.467959169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.348873 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.349182 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.849166768 +0000 UTC m=+131.569967945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.450025 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.450235 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.950205429 +0000 UTC m=+131.671006606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.450680 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.450741 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.450834 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.451123 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:25.951102061 +0000 UTC m=+131.671903258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.455736 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.551825 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.551916 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.051897536 +0000 UTC m=+131.772698723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.552116 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.552141 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.552207 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.552246 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.552593 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.052575673 +0000 UTC m=+131.773376920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.555345 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48d8f4a9-0b40-486c-ac70-597d1fab05c1-metrics-certs\") pod \"network-metrics-daemon-gjs2g\" (UID: \"48d8f4a9-0b40-486c-ac70-597d1fab05c1\") " pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.555804 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.556915 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.605717 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44702: no serving certificate available for the kubelet" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.653091 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.653400 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.153383748 +0000 UTC m=+131.874184925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.744868 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjs2g" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.751806 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.754546 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.754832 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.254818969 +0000 UTC m=+131.975620146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.790670 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.856500 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.856724 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.356693151 +0000 UTC m=+132.077494328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.857527 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.858037 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.358017465 +0000 UTC m=+132.078818632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.959335 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.959579 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.459546578 +0000 UTC m=+132.180347755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:25 crc kubenswrapper[5114]: I1210 15:48:25.960140 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:25 crc kubenswrapper[5114]: E1210 15:48:25.960671 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.460651556 +0000 UTC m=+132.181452733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.061237 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.061454 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.56142893 +0000 UTC m=+132.282230107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.061548 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.061844 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.561832341 +0000 UTC m=+132.282633508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.162387 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.162552 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.662523892 +0000 UTC m=+132.383325069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.162665 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.162977 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.662963093 +0000 UTC m=+132.383764270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.263878 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.264039 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.764005714 +0000 UTC m=+132.484806891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.264462 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.264789 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.764776773 +0000 UTC m=+132.485577950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.268874 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.339030 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.368993 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.369299 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.869256691 +0000 UTC m=+132.590057888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.384553 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.384598 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.385844 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.386816 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389701 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerStarted","Data":"757b4dbf81394fffa875318b552260c6723fb5afa0fa7e83b4e93757491a6053"} Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389736 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"94dee5e7-4d12-43db-83e6-44b77c7ae2ce","Type":"ContainerStarted","Data":"cb1caf26f424dad03da3c3a3faa2b3e2a18681c8f13179e5c95081ef0412f6b8"} Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389769 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389786 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerStarted","Data":"a81b9af3f58df1d5eac84883fe4f247acdcb4959fab2b89e0bf720d5b42caf2d"} Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389800 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" event={"ID":"79e5de70-9480-4091-8467-73e7b3d12424","Type":"ContainerDied","Data":"25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c"} Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389824 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25349e22d7ba1458461c8ef53c8b80553ecf8eea4f9c0188dba325b0af19573c" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389837 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerStarted","Data":"294a415a4a4962292c06366d514e97e0e19a65fc335a73732416e1f29889f00f"} Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.389854 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerStarted","Data":"88ee232a4e2c6caf16fb1a2ecfd2b8b06a22f0cf753d5ba9e45cf1b57461d0a7"} 
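The repeated failures above follow one pattern: the volume manager keeps scheduling MountVolume/UnmountVolume work for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2, each attempt fails because the kubevirt.io.hostpath-provisioner node plugin has not yet registered with the kubelet, and nestedpendingoperations pushes the next attempt out by durationBeforeRetry (500ms in these entries) until the driver registration later in the log lands. Below is a minimal Go sketch of that wait-for-registration retry loop, under the assumption of a toy in-memory registry; driverRegistry, lookup, and the fixed 500ms delay are illustrative stand-ins, not the kubelet's actual plugin-manager or operation-executor code.

package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry is an illustrative stand-in for the kubelet's list of
// registered CSI drivers; the real bookkeeping lives in the kubelet's
// plugin manager and csi_plugin.go, not in this sketch.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> node-service socket path
}

func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func (r *driverRegistry) lookup(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// Same condition the log keeps reporting: no CSI client can be built
		// because the driver has not registered its socket yet.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}
	const driver = "kubevirt.io.hostpath-provisioner"

	// Simulate the hostpath-provisioner plugin pod registering its socket a
	// couple of seconds after the retries start, as happens in the log.
	go func() {
		time.Sleep(2 * time.Second)
		reg.register(driver, "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	}()

	// Reconciler-style retry: after each failure, no retries are permitted
	// until durationBeforeRetry has elapsed (500ms in these entries).
	const durationBeforeRetry = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ep, err := reg.lookup(driver)
		if err == nil {
			fmt.Printf("attempt %d: driver registered at %s, MountDevice can proceed\n", attempt, ep)
			return
		}
		fmt.Printf("attempt %d: %v; next retry in %s\n", attempt, err, durationBeforeRetry)
		time.Sleep(durationBeforeRetry)
	}
}

The retries themselves are harmless here; they stop as soon as the registration entries later in the log appear and the pending mount for image-registry-66587d64c8-2tbm6 succeeds.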
Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.390469 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.400901 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.467436 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v9phm" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470110 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470156 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzmw\" (UniqueName: \"kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470225 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470303 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470352 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.470520 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:26.970503337 +0000 UTC m=+132.691304544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470718 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.470867 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxq2x\" (UniqueName: \"kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.523368 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=4.5233522619999995 podStartE2EDuration="4.523352262s" podCreationTimestamp="2025-12-10 15:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:26.520972431 +0000 UTC m=+132.241773628" watchObservedRunningTime="2025-12-10 15:48:26.523352262 +0000 UTC m=+132.244153429" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.542205 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.574424 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.574889 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jlzmw\" (UniqueName: \"kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.574953 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.574997 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content\") pod \"redhat-marketplace-qbbrv\" (UID: 
\"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.575042 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.575061 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.575086 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxq2x\" (UniqueName: \"kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.575539 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.075512328 +0000 UTC m=+132.796313505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.576241 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.576676 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.578074 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.585843 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities\") pod \"redhat-marketplace-qbbrv\" (UID: 
\"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.596538 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.604197 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.654317 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlzmw\" (UniqueName: \"kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw\") pod \"redhat-marketplace-tkn7z\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.661214 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxq2x\" (UniqueName: \"kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x\") pod \"redhat-marketplace-qbbrv\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.679097 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.679193 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.679243 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsxn\" (UniqueName: \"kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.679296 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.679593 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.179579506 +0000 UTC m=+132.900380683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.707795 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.783546 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.783714 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.283688064 +0000 UTC m=+133.004489261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.783801 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.783856 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wnsxn\" (UniqueName: \"kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.783908 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.783953 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 
15:48:26.784233 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.284224188 +0000 UTC m=+133.005025375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.784768 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.785444 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.811315 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnsxn\" (UniqueName: \"kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn\") pod \"redhat-operators-g2zlq\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.885109 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.885713 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.38569511 +0000 UTC m=+133.106496287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.928767 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.941954 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.942123 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.986578 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.986637 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc72d\" (UniqueName: \"kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.986739 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:26 crc kubenswrapper[5114]: I1210 15:48:26.986870 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:26 crc kubenswrapper[5114]: E1210 15:48:26.987398 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.487381877 +0000 UTC m=+133.208183054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.088196 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.088466 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.088535 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.088566 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc72d\" (UniqueName: \"kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.088916 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.58889592 +0000 UTC m=+133.309697097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.089427 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.089771 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.110571 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc72d\" (UniqueName: \"kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d\") pod \"redhat-operators-f9h94\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.122004 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.126686 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.151103 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gjs2g"] Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.157158 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-7nbcs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.157216 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-7nbcs" podUID="d68dcc8d-b977-44e9-a63c-1cee775b50f2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.157444 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.172921 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.191442 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.192209 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.692195448 +0000 UTC m=+133.412996625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.293941 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.294129 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.79408458 +0000 UTC m=+133.514885767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.294616 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.295078 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.795067805 +0000 UTC m=+133.515868992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.352397 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" event={"ID":"2e757457-618f-4625-8008-3cb8989aa882","Type":"ContainerStarted","Data":"298aa02f2035330900db47c9de581782dac8cb50765be4afe3137d4bdacf194b"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.366336 5114 generic.go:358] "Generic (PLEG): container finished" podID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerID="b666afe5c1f16390566efd7cf85aeefb2480355c505804c7413f545a7ef08455" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.367123 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerDied","Data":"b666afe5c1f16390566efd7cf85aeefb2480355c505804c7413f545a7ef08455"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.369626 5114 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.372161 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"d5a5a615cc474b6e01a97fd757ffb81e9a77c54fa9ce31cfeea0839bf7a1ce27"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.376102 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" event={"ID":"48d8f4a9-0b40-486c-ac70-597d1fab05c1","Type":"ContainerStarted","Data":"77fba1f783b92f325edcacbd10e634b7a444d2ab02cc22d55001357235692db8"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.384441 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b66c0625acd252eaf5901866d8805e158212952a0b59f304dbdd2f68cb61657c"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.389219 5114 generic.go:358] "Generic (PLEG): container finished" podID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerID="4818d2f2859cdd56a26f14dc865a72393e3a30d570ec539c8ddd866ad8414488" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.389643 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerDied","Data":"4818d2f2859cdd56a26f14dc865a72393e3a30d570ec539c8ddd866ad8414488"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.397206 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.397553 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:27.897531452 +0000 UTC m=+133.618332639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.400998 5114 generic.go:358] "Generic (PLEG): container finished" podID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerID="620a8eede76fa27029317a1c42e6ea8bc13d5b1dccd01add92058829bd04f03a" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.401156 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerDied","Data":"620a8eede76fa27029317a1c42e6ea8bc13d5b1dccd01add92058829bd04f03a"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.404854 5114 generic.go:358] "Generic (PLEG): container finished" podID="949ddda2-62c3-484c-9034-3b447502cf4d" containerID="3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.404943 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerDied","Data":"3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.408646 5114 generic.go:358] "Generic (PLEG): container finished" podID="94dee5e7-4d12-43db-83e6-44b77c7ae2ce" containerID="cb1caf26f424dad03da3c3a3faa2b3e2a18681c8f13179e5c95081ef0412f6b8" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.410442 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"94dee5e7-4d12-43db-83e6-44b77c7ae2ce","Type":"ContainerDied","Data":"cb1caf26f424dad03da3c3a3faa2b3e2a18681c8f13179e5c95081ef0412f6b8"} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.500758 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.501833 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:48:28.001816895 +0000 UTC m=+133.722618072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-2tbm6" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.540035 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.569443 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.569475 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.581107 5114 patch_prober.go:28] interesting pod/console-64d44f6ddf-59hqn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.581167 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-59hqn" podUID="b2b61e86-45b8-4491-8236-f056a381a5ab" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.601397 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:27 crc kubenswrapper[5114]: E1210 15:48:27.601875 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-10 15:48:28.101857971 +0000 UTC m=+133.822659148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.686549 5114 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-10T15:48:27.369645148Z","UUID":"d053cf3a-ea68-4983-abe1-f92132d337e4","Handler":null,"Name":"","Endpoint":""} Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.689068 5114 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.689104 5114 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.704356 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.706649 5114 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
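At 15:48:27 the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, the kubelet validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock, and the attacher then skips MountDevice because the driver's node service does not advertise STAGE_UNSTAGE_VOLUME. The Go sketch below shows that capability check using the CSI spec's generated Go bindings (github.com/container-storage-interface/spec/lib/go/csi); the socket path is taken from the registration entry above, and the gRPC wiring is an illustration rather than the kubelet's own csi_attacher client code.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// nodeSupportsStageUnstage reports whether the driver's Node service
// advertises STAGE_UNSTAGE_VOLUME. When it does not, the staging step (the
// kubelet's "MountDevice") is skipped and the volume goes straight to the
// per-pod NodePublishVolume, which is what the entry above records.
func nodeSupportsStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Socket path from the registration entry above.
	const endpoint = "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock"

	conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ok, err := nodeSupportsStageUnstage(ctx, csi.NewNodeClient(conn))
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("driver requires NodeStageVolume before publish")
	} else {
		fmt.Println("STAGE_UNSTAGE_VOLUME not set; skipping the staging step")
	}
}

With the driver registered, the remaining entries show MountVolume.SetUp succeeding for the PVC and the image-registry pod's sandbox being created.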
Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.706690 5114 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.740695 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-2tbm6\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.800015 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.806048 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.807637 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:48:27 crc kubenswrapper[5114]: W1210 15:48:27.812335 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5ea968_fe23_45bd_9ecd_8798399151e6.slice/crio-e4813820b4848acb02a7b2dd137f43301b6aac72e597ee7c1a802c9084744038 WatchSource:0}: Error finding container e4813820b4848acb02a7b2dd137f43301b6aac72e597ee7c1a802c9084744038: Status 404 returned error can't find the container with id e4813820b4848acb02a7b2dd137f43301b6aac72e597ee7c1a802c9084744038 Dec 10 15:48:27 crc kubenswrapper[5114]: W1210 15:48:27.815322 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod270b074f_91f5_4ea6_b465_b0cc4a81f016.slice/crio-1760c5e1fc86138734cdd0f8e12dd02fd244e312b26e79b7553f23e6d27d4d26 WatchSource:0}: Error finding container 1760c5e1fc86138734cdd0f8e12dd02fd244e312b26e79b7553f23e6d27d4d26: Status 404 returned error can't find the container with id 1760c5e1fc86138734cdd0f8e12dd02fd244e312b26e79b7553f23e6d27d4d26 Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.820229 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.829175 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.849181 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 10 15:48:27 crc kubenswrapper[5114]: I1210 15:48:27.918938 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.189736 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44716: no serving certificate available for the kubelet" Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.212851 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:48:28 crc kubenswrapper[5114]: W1210 15:48:28.237086 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a2e767_3d9b_4af5_8889_ab3f2b41a071.slice/crio-64126368dab96d8061bec79c4c3444ce34645d016014fe506e405fd0f9e6f281 WatchSource:0}: Error finding container 64126368dab96d8061bec79c4c3444ce34645d016014fe506e405fd0f9e6f281: Status 404 returned error can't find the container with id 64126368dab96d8061bec79c4c3444ce34645d016014fe506e405fd0f9e6f281 Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.429441 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" event={"ID":"48d8f4a9-0b40-486c-ac70-597d1fab05c1","Type":"ContainerStarted","Data":"ad9560ace4a4a9114ef033ae2a8135dbe5f125aebfcf243917919efa8c654834"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.429518 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjs2g" event={"ID":"48d8f4a9-0b40-486c-ac70-597d1fab05c1","Type":"ContainerStarted","Data":"58629a078f11c048c3f0355998ffb396a8cbf0cd4a5958ab780648179b4da760"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.460197 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gjs2g" podStartSLOduration=115.4601778 podStartE2EDuration="1m55.4601778s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:28.45618954 +0000 UTC m=+134.176990707" watchObservedRunningTime="2025-12-10 15:48:28.4601778 +0000 UTC m=+134.180978987" Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.469070 5114 generic.go:358] "Generic (PLEG): container finished" podID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerID="342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563" exitCode=0 Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.469201 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerDied","Data":"342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.469229 5114 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerStarted","Data":"c8cd0d95fdff99ee81a255c083c465f429e6c06c70da2d1b0bf9fcb16d67944e"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.481593 5114 generic.go:358] "Generic (PLEG): container finished" podID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerID="acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1" exitCode=0 Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.481726 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerDied","Data":"acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.481781 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerStarted","Data":"570347bc52216d64f007fa1f9cf2b836e1e8463f2006243f0ed527757cc868de"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.495767 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"debcc5aa527cee271c1148da73b83503e2807717a830f4836212e902038db322"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.507291 5114 generic.go:358] "Generic (PLEG): container finished" podID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerID="cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f" exitCode=0 Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.507422 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerDied","Data":"cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.507452 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerStarted","Data":"1760c5e1fc86138734cdd0f8e12dd02fd244e312b26e79b7553f23e6d27d4d26"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.519079 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" event={"ID":"64a2e767-3d9b-4af5-8889-ab3f2b41a071","Type":"ContainerStarted","Data":"64126368dab96d8061bec79c4c3444ce34645d016014fe506e405fd0f9e6f281"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.522245 5114 generic.go:358] "Generic (PLEG): container finished" podID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerID="7e80796ee88d4b41491d97fd417dfc84667a2b4e3b3e13d4b4c8d40749f31cb5" exitCode=0 Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.522484 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerDied","Data":"7e80796ee88d4b41491d97fd417dfc84667a2b4e3b3e13d4b4c8d40749f31cb5"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.522565 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" 
event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerStarted","Data":"e4813820b4848acb02a7b2dd137f43301b6aac72e597ee7c1a802c9084744038"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.539954 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" event={"ID":"2e757457-618f-4625-8008-3cb8989aa882","Type":"ContainerStarted","Data":"10d016ff3f7e33103468731bf0837c07e7558b6646cd930cc1d03537966327dc"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.541389 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"8c27af46abb25fd314b04262fd783a88f1404c2bdad09494c8e3ecd337e1c71e"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.547316 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"f28bace7cece1da81792691b03c0f89100e719f69365c703d86c85091d332f13"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.547372 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"b6dea43a9415a7d3a2eed12dafedcb4713a4f255fff25fa91d16f9bf593b3024"} Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.547658 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.614111 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 10 15:48:28 crc kubenswrapper[5114]: I1210 15:48:28.961011 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.028355 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir\") pod \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.028480 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94dee5e7-4d12-43db-83e6-44b77c7ae2ce" (UID: "94dee5e7-4d12-43db-83e6-44b77c7ae2ce"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.028949 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access\") pod \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\" (UID: \"94dee5e7-4d12-43db-83e6-44b77c7ae2ce\") " Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.029161 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.049487 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94dee5e7-4d12-43db-83e6-44b77c7ae2ce" (UID: "94dee5e7-4d12-43db-83e6-44b77c7ae2ce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.130752 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94dee5e7-4d12-43db-83e6-44b77c7ae2ce-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.483596 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.484304 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94dee5e7-4d12-43db-83e6-44b77c7ae2ce" containerName="pruner" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.484317 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dee5e7-4d12-43db-83e6-44b77c7ae2ce" containerName="pruner" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.484451 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="94dee5e7-4d12-43db-83e6-44b77c7ae2ce" containerName="pruner" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.624433 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" event={"ID":"64a2e767-3d9b-4af5-8889-ab3f2b41a071","Type":"ContainerStarted","Data":"b5c085a6a942c7a987a05a5ea8dd9853f7b4cb2bb9e7eca8e3e8d0dd120285ac"} Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.624503 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.624542 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"94dee5e7-4d12-43db-83e6-44b77c7ae2ce","Type":"ContainerDied","Data":"0019f0f9ccb9d6575223db6bd1b80823c26d2b87b81de862542d666c4be8dc5e"} Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.624558 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0019f0f9ccb9d6575223db6bd1b80823c26d2b87b81de862542d666c4be8dc5e" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.624569 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" event={"ID":"2e757457-618f-4625-8008-3cb8989aa882","Type":"ContainerStarted","Data":"6c7c5e5e4d2e4524af8cd55e274f6f412ce617742d70e4f03944a73cdf86042f"} Dec 10 15:48:29 crc kubenswrapper[5114]: 
I1210 15:48:29.624996 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.626531 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.626856 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.627483 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.627807 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.705690 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" podStartSLOduration=116.705668185 podStartE2EDuration="1m56.705668185s" podCreationTimestamp="2025-12-10 15:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:29.70269083 +0000 UTC m=+135.423492017" watchObservedRunningTime="2025-12-10 15:48:29.705668185 +0000 UTC m=+135.426469362" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.744623 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.744741 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.855828 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.855921 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.856580 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc 
kubenswrapper[5114]: I1210 15:48:29.880246 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dnk6l" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.887161 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.904676 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-j45nf" podStartSLOduration=16.904650676 podStartE2EDuration="16.904650676s" podCreationTimestamp="2025-12-10 15:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:29.776756521 +0000 UTC m=+135.497557718" watchObservedRunningTime="2025-12-10 15:48:29.904650676 +0000 UTC m=+135.625451843" Dec 10 15:48:29 crc kubenswrapper[5114]: I1210 15:48:29.951474 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:30 crc kubenswrapper[5114]: I1210 15:48:30.439891 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 10 15:48:30 crc kubenswrapper[5114]: I1210 15:48:30.603539 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6cf7c5f1-602e-434e-be18-95fffa160cbc","Type":"ContainerStarted","Data":"e8d504102f6b2a138115468ea7f205204828a97ccf6cd508a2c3dc0db4e7a2cc"} Dec 10 15:48:31 crc kubenswrapper[5114]: E1210 15:48:31.195941 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:31 crc kubenswrapper[5114]: E1210 15:48:31.201730 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:31 crc kubenswrapper[5114]: E1210 15:48:31.205680 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:31 crc kubenswrapper[5114]: E1210 15:48:31.205756 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 10 15:48:32 crc kubenswrapper[5114]: I1210 15:48:32.228753 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-7nbcs container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 10 15:48:32 crc kubenswrapper[5114]: I1210 15:48:32.228826 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-7nbcs" podUID="d68dcc8d-b977-44e9-a63c-1cee775b50f2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 10 15:48:32 crc kubenswrapper[5114]: I1210 15:48:32.228966 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:48:33 crc kubenswrapper[5114]: I1210 15:48:33.334955 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33658: no serving certificate available for the kubelet" Dec 10 15:48:33 crc kubenswrapper[5114]: I1210 15:48:33.629865 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6cf7c5f1-602e-434e-be18-95fffa160cbc","Type":"ContainerStarted","Data":"575e57ea33f7d9b1f75838d44464b0104b7fd9ba0ae5a614424241f46185aa1b"} Dec 10 15:48:33 crc kubenswrapper[5114]: I1210 15:48:33.645735 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.645701799 podStartE2EDuration="4.645701799s" podCreationTimestamp="2025-12-10 15:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:33.645296208 +0000 UTC m=+139.366097385" watchObservedRunningTime="2025-12-10 15:48:33.645701799 +0000 UTC m=+139.366502966" Dec 10 15:48:35 crc kubenswrapper[5114]: E1210 15:48:35.105079 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:48:35 crc kubenswrapper[5114]: I1210 15:48:35.641816 5114 generic.go:358] "Generic (PLEG): container finished" podID="6cf7c5f1-602e-434e-be18-95fffa160cbc" containerID="575e57ea33f7d9b1f75838d44464b0104b7fd9ba0ae5a614424241f46185aa1b" exitCode=0 Dec 10 15:48:35 crc kubenswrapper[5114]: I1210 15:48:35.641902 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6cf7c5f1-602e-434e-be18-95fffa160cbc","Type":"ContainerDied","Data":"575e57ea33f7d9b1f75838d44464b0104b7fd9ba0ae5a614424241f46185aa1b"} Dec 10 15:48:36 crc kubenswrapper[5114]: I1210 15:48:36.404086 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:36 crc kubenswrapper[5114]: I1210 15:48:36.405329 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" podUID="c7d243eb-5e31-4635-803d-2408fe9f8575" containerName="controller-manager" 
containerID="cri-o://3ea53b882c0d4708da03ff6debf88c4d4dc62d0642bf9df19aefd52d83cba5bd" gracePeriod=30 Dec 10 15:48:36 crc kubenswrapper[5114]: I1210 15:48:36.426495 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:36 crc kubenswrapper[5114]: I1210 15:48:36.426827 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerName="route-controller-manager" containerID="cri-o://b4342ae6d45ec4f4552726cda9c058e3420d34a7777c23f3badffae944b2406d" gracePeriod=30 Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.573447 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.577557 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-59hqn" Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.663393 5114 generic.go:358] "Generic (PLEG): container finished" podID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerID="b4342ae6d45ec4f4552726cda9c058e3420d34a7777c23f3badffae944b2406d" exitCode=0 Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.664214 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" event={"ID":"b73fffad-3220-4b21-9fd2-046191bf30ab","Type":"ContainerDied","Data":"b4342ae6d45ec4f4552726cda9c058e3420d34a7777c23f3badffae944b2406d"} Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.878348 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.986189 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir\") pod \"6cf7c5f1-602e-434e-be18-95fffa160cbc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.986385 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access\") pod \"6cf7c5f1-602e-434e-be18-95fffa160cbc\" (UID: \"6cf7c5f1-602e-434e-be18-95fffa160cbc\") " Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.986493 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6cf7c5f1-602e-434e-be18-95fffa160cbc" (UID: "6cf7c5f1-602e-434e-be18-95fffa160cbc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.986719 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cf7c5f1-602e-434e-be18-95fffa160cbc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:37 crc kubenswrapper[5114]: I1210 15:48:37.999294 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6cf7c5f1-602e-434e-be18-95fffa160cbc" (UID: "6cf7c5f1-602e-434e-be18-95fffa160cbc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.046469 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.099628 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf7c5f1-602e-434e-be18-95fffa160cbc-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.670168 5114 generic.go:358] "Generic (PLEG): container finished" podID="c7d243eb-5e31-4635-803d-2408fe9f8575" containerID="3ea53b882c0d4708da03ff6debf88c4d4dc62d0642bf9df19aefd52d83cba5bd" exitCode=0 Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.670328 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" event={"ID":"c7d243eb-5e31-4635-803d-2408fe9f8575","Type":"ContainerDied","Data":"3ea53b882c0d4708da03ff6debf88c4d4dc62d0642bf9df19aefd52d83cba5bd"} Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.672248 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6cf7c5f1-602e-434e-be18-95fffa160cbc","Type":"ContainerDied","Data":"e8d504102f6b2a138115468ea7f205204828a97ccf6cd508a2c3dc0db4e7a2cc"} Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.672313 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d504102f6b2a138115468ea7f205204828a97ccf6cd508a2c3dc0db4e7a2cc" Dec 10 15:48:38 crc kubenswrapper[5114]: I1210 15:48:38.672406 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.612240 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.618771 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.642341 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.642975 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6cf7c5f1-602e-434e-be18-95fffa160cbc" containerName="pruner" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.642991 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf7c5f1-602e-434e-be18-95fffa160cbc" containerName="pruner" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643029 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7d243eb-5e31-4635-803d-2408fe9f8575" containerName="controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643036 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d243eb-5e31-4635-803d-2408fe9f8575" containerName="controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643046 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerName="route-controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643054 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerName="route-controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643159 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7d243eb-5e31-4635-803d-2408fe9f8575" containerName="controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643172 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" containerName="route-controller-manager" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.643182 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6cf7c5f1-602e-434e-be18-95fffa160cbc" containerName="pruner" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.646561 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.651909 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.663105 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.668786 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.679159 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.687210 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.687390 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl" event={"ID":"b73fffad-3220-4b21-9fd2-046191bf30ab","Type":"ContainerDied","Data":"a0a65a9ab8f6515ffef8aaa54582cec2c7499ecf43ce62ae7a18230cc9acf293"} Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.687427 5114 scope.go:117] "RemoveContainer" containerID="b4342ae6d45ec4f4552726cda9c058e3420d34a7777c23f3badffae944b2406d" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.697371 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" event={"ID":"c7d243eb-5e31-4635-803d-2408fe9f8575","Type":"ContainerDied","Data":"8d7ed8ac600a8defdba6db94f80df4e27f5ec7a38884ab521bd7b33bbd48e196"} Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.697560 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-d6hj2" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721311 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert\") pod \"b73fffad-3220-4b21-9fd2-046191bf30ab\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721378 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") pod \"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721424 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config\") pod \"b73fffad-3220-4b21-9fd2-046191bf30ab\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721440 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") pod \"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721457 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles\") pod \"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721473 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca\") pod \"b73fffad-3220-4b21-9fd2-046191bf30ab\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721504 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnghc\" (UniqueName: \"kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc\") pod 
\"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.721542 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvblc\" (UniqueName: \"kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc\") pod \"b73fffad-3220-4b21-9fd2-046191bf30ab\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.722414 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.722500 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config" (OuterVolumeSpecName: "config") pod "b73fffad-3220-4b21-9fd2-046191bf30ab" (UID: "b73fffad-3220-4b21-9fd2-046191bf30ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.722535 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp\") pod \"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.722571 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp\") pod \"b73fffad-3220-4b21-9fd2-046191bf30ab\" (UID: \"b73fffad-3220-4b21-9fd2-046191bf30ab\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.722610 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert\") pod \"c7d243eb-5e31-4635-803d-2408fe9f8575\" (UID: \"c7d243eb-5e31-4635-803d-2408fe9f8575\") " Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723123 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723192 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723312 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: 
\"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723386 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723405 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2blb\" (UniqueName: \"kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723553 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723565 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.723857 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp" (OuterVolumeSpecName: "tmp") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.724592 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp" (OuterVolumeSpecName: "tmp") pod "b73fffad-3220-4b21-9fd2-046191bf30ab" (UID: "b73fffad-3220-4b21-9fd2-046191bf30ab"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.724758 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "b73fffad-3220-4b21-9fd2-046191bf30ab" (UID: "b73fffad-3220-4b21-9fd2-046191bf30ab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.725940 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config" (OuterVolumeSpecName: "config") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.726167 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.728489 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b73fffad-3220-4b21-9fd2-046191bf30ab" (UID: "b73fffad-3220-4b21-9fd2-046191bf30ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.729206 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc" (OuterVolumeSpecName: "kube-api-access-hnghc") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "kube-api-access-hnghc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.729328 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc" (OuterVolumeSpecName: "kube-api-access-xvblc") pod "b73fffad-3220-4b21-9fd2-046191bf30ab" (UID: "b73fffad-3220-4b21-9fd2-046191bf30ab"). InnerVolumeSpecName "kube-api-access-xvblc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.729288 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7d243eb-5e31-4635-803d-2408fe9f8575" (UID: "c7d243eb-5e31-4635-803d-2408fe9f8575"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824776 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824827 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824875 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4mpv\" (UniqueName: \"kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824903 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824937 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824956 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2blb\" (UniqueName: \"kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.824980 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825003 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc 
kubenswrapper[5114]: I1210 15:48:39.825047 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825089 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825117 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825171 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvblc\" (UniqueName: \"kubernetes.io/projected/b73fffad-3220-4b21-9fd2-046191bf30ab-kube-api-access-xvblc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825184 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7d243eb-5e31-4635-803d-2408fe9f8575-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825195 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b73fffad-3220-4b21-9fd2-046191bf30ab-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825206 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d243eb-5e31-4635-803d-2408fe9f8575-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825218 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b73fffad-3220-4b21-9fd2-046191bf30ab-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825229 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825239 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d243eb-5e31-4635-803d-2408fe9f8575-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825249 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b73fffad-3220-4b21-9fd2-046191bf30ab-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.825258 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnghc\" (UniqueName: 
\"kubernetes.io/projected/c7d243eb-5e31-4635-803d-2408fe9f8575-kube-api-access-hnghc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.826496 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.827135 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.827786 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.831010 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.842109 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2blb\" (UniqueName: \"kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb\") pod \"route-controller-manager-55db6555fd-9mbmg\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.926305 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.926365 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.926455 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4mpv\" (UniqueName: \"kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 
15:48:39.926499 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.926520 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.926561 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.927934 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.927999 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.928069 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.928538 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.932171 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.942514 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4mpv\" (UniqueName: \"kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv\") pod \"controller-manager-6bc87d94d7-5j9ds\" (UID: 
\"88db556e-cb86-4720-bd46-ee54074d5b7a\") " pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:39 crc kubenswrapper[5114]: I1210 15:48:39.988230 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.005995 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.014387 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.018064 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gzfvl"] Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.032384 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.035927 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-d6hj2"] Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.575592 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73fffad-3220-4b21-9fd2-046191bf30ab" path="/var/lib/kubelet/pods/b73fffad-3220-4b21-9fd2-046191bf30ab/volumes" Dec 10 15:48:40 crc kubenswrapper[5114]: I1210 15:48:40.576531 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d243eb-5e31-4635-803d-2408fe9f8575" path="/var/lib/kubelet/pods/c7d243eb-5e31-4635-803d-2408fe9f8575/volumes" Dec 10 15:48:41 crc kubenswrapper[5114]: E1210 15:48:41.183512 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:41 crc kubenswrapper[5114]: E1210 15:48:41.185357 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:41 crc kubenswrapper[5114]: E1210 15:48:41.186809 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:41 crc kubenswrapper[5114]: E1210 15:48:41.186860 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 10 15:48:42 crc kubenswrapper[5114]: I1210 15:48:42.230111 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/downloads-747b44746d-7nbcs" Dec 10 15:48:43 crc kubenswrapper[5114]: I1210 15:48:43.594665 5114 ???:1] "http: TLS handshake error from 192.168.126.11:59060: no serving certificate available for the kubelet" Dec 10 15:48:45 crc kubenswrapper[5114]: E1210 15:48:45.213530 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:48:51 crc kubenswrapper[5114]: E1210 15:48:51.184689 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:51 crc kubenswrapper[5114]: E1210 15:48:51.187165 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:51 crc kubenswrapper[5114]: E1210 15:48:51.188694 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 10 15:48:51 crc kubenswrapper[5114]: E1210 15:48:51.188774 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 10 15:48:52 crc kubenswrapper[5114]: I1210 15:48:52.631860 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:48:53 crc kubenswrapper[5114]: I1210 15:48:53.875724 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-lskwt" Dec 10 15:48:54 crc kubenswrapper[5114]: I1210 15:48:54.605816 5114 scope.go:117] "RemoveContainer" containerID="3ea53b882c0d4708da03ff6debf88c4d4dc62d0642bf9df19aefd52d83cba5bd" Dec 10 15:48:54 crc kubenswrapper[5114]: I1210 15:48:54.812851 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-55xzh_4ff01055-87cd-4379-ba86-8778485be566/kube-multus-additional-cni-plugins/0.log" Dec 10 15:48:54 crc kubenswrapper[5114]: I1210 15:48:54.813370 5114 generic.go:358] "Generic (PLEG): container finished" podID="4ff01055-87cd-4379-ba86-8778485be566" 
containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" exitCode=137 Dec 10 15:48:54 crc kubenswrapper[5114]: I1210 15:48:54.813495 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" event={"ID":"4ff01055-87cd-4379-ba86-8778485be566","Type":"ContainerDied","Data":"26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65"} Dec 10 15:48:55 crc kubenswrapper[5114]: E1210 15:48:55.326843 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:48:55 crc kubenswrapper[5114]: I1210 15:48:55.592186 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:55 crc kubenswrapper[5114]: I1210 15:48:55.644356 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:48:55 crc kubenswrapper[5114]: W1210 15:48:55.646579 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88db556e_cb86_4720_bd46_ee54074d5b7a.slice/crio-208d0e23488dde85270efd91fe01bb17e78bb81488c4ad82256651767bc4f1a7 WatchSource:0}: Error finding container 208d0e23488dde85270efd91fe01bb17e78bb81488c4ad82256651767bc4f1a7: Status 404 returned error can't find the container with id 208d0e23488dde85270efd91fe01bb17e78bb81488c4ad82256651767bc4f1a7 Dec 10 15:48:55 crc kubenswrapper[5114]: I1210 15:48:55.819212 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" event={"ID":"abc7b4be-eece-477e-8317-6eff6f579ca8","Type":"ContainerStarted","Data":"3ca24e507e53146276de9fd18fc39cd53bc830f73b8a8d3629f2a54587aadd86"} Dec 10 15:48:55 crc kubenswrapper[5114]: I1210 15:48:55.820249 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" event={"ID":"88db556e-cb86-4720-bd46-ee54074d5b7a","Type":"ContainerStarted","Data":"208d0e23488dde85270efd91fe01bb17e78bb81488c4ad82256651767bc4f1a7"} Dec 10 15:48:56 crc kubenswrapper[5114]: I1210 15:48:56.173649 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:48:56 crc kubenswrapper[5114]: I1210 15:48:56.196098 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:56 crc kubenswrapper[5114]: I1210 15:48:56.899260 5114 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod79e5de70-9480-4091-8467-73e7b3d12424"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod79e5de70-9480-4091-8467-73e7b3d12424] : Timed out while waiting for systemd to remove kubepods-burstable-pod79e5de70_9480_4091_8467_73e7b3d12424.slice" Dec 10 15:48:56 crc 
kubenswrapper[5114]: E1210 15:48:56.899326 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod79e5de70-9480-4091-8467-73e7b3d12424] : unable to destroy cgroup paths for cgroup [kubepods burstable pod79e5de70-9480-4091-8467-73e7b3d12424] : Timed out while waiting for systemd to remove kubepods-burstable-pod79e5de70_9480_4091_8467_73e7b3d12424.slice" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" podUID="79e5de70-9480-4091-8467-73e7b3d12424" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.232739 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-55xzh_4ff01055-87cd-4379-ba86-8778485be566/kube-multus-additional-cni-plugins/0.log" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.232815 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.360612 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready\") pod \"4ff01055-87cd-4379-ba86-8778485be566\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.360863 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwt84\" (UniqueName: \"kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84\") pod \"4ff01055-87cd-4379-ba86-8778485be566\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.360885 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist\") pod \"4ff01055-87cd-4379-ba86-8778485be566\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.360909 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir\") pod \"4ff01055-87cd-4379-ba86-8778485be566\" (UID: \"4ff01055-87cd-4379-ba86-8778485be566\") " Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.361014 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready" (OuterVolumeSpecName: "ready") pod "4ff01055-87cd-4379-ba86-8778485be566" (UID: "4ff01055-87cd-4379-ba86-8778485be566"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.361042 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4ff01055-87cd-4379-ba86-8778485be566" (UID: "4ff01055-87cd-4379-ba86-8778485be566"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.361120 5114 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ff01055-87cd-4379-ba86-8778485be566-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.361168 5114 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4ff01055-87cd-4379-ba86-8778485be566-ready\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.361576 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4ff01055-87cd-4379-ba86-8778485be566" (UID: "4ff01055-87cd-4379-ba86-8778485be566"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.367007 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84" (OuterVolumeSpecName: "kube-api-access-zwt84") pod "4ff01055-87cd-4379-ba86-8778485be566" (UID: "4ff01055-87cd-4379-ba86-8778485be566"). InnerVolumeSpecName "kube-api-access-zwt84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.462251 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zwt84\" (UniqueName: \"kubernetes.io/projected/4ff01055-87cd-4379-ba86-8778485be566-kube-api-access-zwt84\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.462297 5114 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ff01055-87cd-4379-ba86-8778485be566-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.837323 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerStarted","Data":"c2366ec02f86243c57d82eabc6b72016feebe73be160b68de4cdff4895790f69"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.840727 5114 generic.go:358] "Generic (PLEG): container finished" podID="949ddda2-62c3-484c-9034-3b447502cf4d" containerID="f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7" exitCode=0 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.840819 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerDied","Data":"f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.843602 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" event={"ID":"abc7b4be-eece-477e-8317-6eff6f579ca8","Type":"ContainerStarted","Data":"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.843748 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" 
podUID="abc7b4be-eece-477e-8317-6eff6f579ca8" containerName="route-controller-manager" containerID="cri-o://ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960" gracePeriod=30 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.843887 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.852880 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.855611 5114 generic.go:358] "Generic (PLEG): container finished" podID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerID="b4df279fd2545c1c47a574318f3545cf9c9a36241cdedcd8be16545ed9e273ed" exitCode=0 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.855952 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerDied","Data":"b4df279fd2545c1c47a574318f3545cf9c9a36241cdedcd8be16545ed9e273ed"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.862433 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" event={"ID":"88db556e-cb86-4720-bd46-ee54074d5b7a","Type":"ContainerStarted","Data":"9148aeae06668f10abd616e53f892141c9816baf4e980e2fde804f824cb120bd"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.862448 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerName="controller-manager" containerID="cri-o://9148aeae06668f10abd616e53f892141c9816baf4e980e2fde804f824cb120bd" gracePeriod=30 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.862788 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.866937 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-55xzh_4ff01055-87cd-4379-ba86-8778485be566/kube-multus-additional-cni-plugins/0.log" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.867286 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.867554 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-55xzh" event={"ID":"4ff01055-87cd-4379-ba86-8778485be566","Type":"ContainerDied","Data":"cc27cd10ab01f98bcdad2ef48b6b965d99feaa7702e08123655eb76dd684f767"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.867622 5114 scope.go:117] "RemoveContainer" containerID="26ab4365d23e44d43ce8e063def011e6a231777f68cdaa667f129a81a2d63e65" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.882377 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerStarted","Data":"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.889475 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerDied","Data":"9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.890001 5114 generic.go:358] "Generic (PLEG): container finished" podID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerID="9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599" exitCode=0 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.909576 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerStarted","Data":"990b383fcb5773dc366e6508c76dcf7e3a8b0f2d95cfa74389333ab118dc48b4"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.912653 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" podStartSLOduration=21.912629021 podStartE2EDuration="21.912629021s" podCreationTimestamp="2025-12-10 15:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:57.907333198 +0000 UTC m=+163.628134395" watchObservedRunningTime="2025-12-10 15:48:57.912629021 +0000 UTC m=+163.633430198" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.914157 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" podStartSLOduration=21.914140689 podStartE2EDuration="21.914140689s" podCreationTimestamp="2025-12-10 15:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:57.88711966 +0000 UTC m=+163.607920837" watchObservedRunningTime="2025-12-10 15:48:57.914140689 +0000 UTC m=+163.634941876" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.916509 5114 generic.go:358] "Generic (PLEG): container finished" podID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerID="71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994" exitCode=0 Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.917001 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" 
event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerDied","Data":"71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994"} Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.932342 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-zw42q" Dec 10 15:48:57 crc kubenswrapper[5114]: I1210 15:48:57.933389 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerStarted","Data":"69bdf76ff651b4876de93bc9953cd33a6b1e092e9075806db91864059fbed73c"} Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.096617 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-55xzh"] Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.099696 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-55xzh"] Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.308766 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.348862 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349615 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="abc7b4be-eece-477e-8317-6eff6f579ca8" containerName="route-controller-manager" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349638 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc7b4be-eece-477e-8317-6eff6f579ca8" containerName="route-controller-manager" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349654 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349663 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349799 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ff01055-87cd-4379-ba86-8778485be566" containerName="kube-multus-additional-cni-plugins" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.349830 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="abc7b4be-eece-477e-8317-6eff6f579ca8" containerName="route-controller-manager" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.377708 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert\") pod \"abc7b4be-eece-477e-8317-6eff6f579ca8\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.377907 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp\") pod \"abc7b4be-eece-477e-8317-6eff6f579ca8\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.377983 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config\") pod \"abc7b4be-eece-477e-8317-6eff6f579ca8\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.378016 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2blb\" (UniqueName: \"kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb\") pod \"abc7b4be-eece-477e-8317-6eff6f579ca8\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.378052 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca\") pod \"abc7b4be-eece-477e-8317-6eff6f579ca8\" (UID: \"abc7b4be-eece-477e-8317-6eff6f579ca8\") " Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.378359 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp" (OuterVolumeSpecName: "tmp") pod "abc7b4be-eece-477e-8317-6eff6f579ca8" (UID: "abc7b4be-eece-477e-8317-6eff6f579ca8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.378790 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config" (OuterVolumeSpecName: "config") pod "abc7b4be-eece-477e-8317-6eff6f579ca8" (UID: "abc7b4be-eece-477e-8317-6eff6f579ca8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.378864 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca" (OuterVolumeSpecName: "client-ca") pod "abc7b4be-eece-477e-8317-6eff6f579ca8" (UID: "abc7b4be-eece-477e-8317-6eff6f579ca8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.387439 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abc7b4be-eece-477e-8317-6eff6f579ca8" (UID: "abc7b4be-eece-477e-8317-6eff6f579ca8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.397674 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb" (OuterVolumeSpecName: "kube-api-access-f2blb") pod "abc7b4be-eece-477e-8317-6eff6f579ca8" (UID: "abc7b4be-eece-477e-8317-6eff6f579ca8"). InnerVolumeSpecName "kube-api-access-f2blb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.479698 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abc7b4be-eece-477e-8317-6eff6f579ca8-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.479725 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.479736 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2blb\" (UniqueName: \"kubernetes.io/projected/abc7b4be-eece-477e-8317-6eff6f579ca8-kube-api-access-f2blb\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.479746 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abc7b4be-eece-477e-8317-6eff6f579ca8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.479754 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abc7b4be-eece-477e-8317-6eff6f579ca8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.552717 5114 patch_prober.go:28] interesting pod/controller-manager-6bc87d94d7-5j9ds container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:42988->10.217.0.56:8443: read: connection reset by peer" start-of-body= Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.552978 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:42988->10.217.0.56:8443: read: connection reset by peer" Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.940819 5114 generic.go:358] "Generic (PLEG): container finished" podID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerID="31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26" exitCode=0 Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.944405 5114 generic.go:358] "Generic (PLEG): container finished" podID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerID="990b383fcb5773dc366e6508c76dcf7e3a8b0f2d95cfa74389333ab118dc48b4" exitCode=0 Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.946488 5114 generic.go:358] "Generic (PLEG): container finished" podID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerID="69bdf76ff651b4876de93bc9953cd33a6b1e092e9075806db91864059fbed73c" exitCode=0 Dec 10 15:48:58 crc kubenswrapper[5114]: I1210 15:48:58.950694 5114 generic.go:358] "Generic (PLEG): container finished" podID="abc7b4be-eece-477e-8317-6eff6f579ca8" containerID="ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960" exitCode=0 Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.714811 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.714850 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" 
event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerDied","Data":"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.714883 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.714903 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.714932 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerStarted","Data":"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.715654 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.721081 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.722119 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.722317 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.723129 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.723138 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.723198 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.726561 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff01055-87cd-4379-ba86-8778485be566" path="/var/lib/kubelet/pods/4ff01055-87cd-4379-ba86-8778485be566/volumes" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727523 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerDied","Data":"990b383fcb5773dc366e6508c76dcf7e3a8b0f2d95cfa74389333ab118dc48b4"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727561 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerDied","Data":"69bdf76ff651b4876de93bc9953cd33a6b1e092e9075806db91864059fbed73c"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727575 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" 
event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerStarted","Data":"7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727589 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerStarted","Data":"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727603 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" event={"ID":"abc7b4be-eece-477e-8317-6eff6f579ca8","Type":"ContainerDied","Data":"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727617 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg" event={"ID":"abc7b4be-eece-477e-8317-6eff6f579ca8","Type":"ContainerDied","Data":"3ca24e507e53146276de9fd18fc39cd53bc830f73b8a8d3629f2a54587aadd86"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.727641 5114 scope.go:117] "RemoveContainer" containerID="ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.744499 5114 scope.go:117] "RemoveContainer" containerID="ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960" Dec 10 15:48:59 crc kubenswrapper[5114]: E1210 15:48:59.745169 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960\": container with ID starting with ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960 not found: ID does not exist" containerID="ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.745212 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960"} err="failed to get container status \"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960\": rpc error: code = NotFound desc = could not find container \"ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960\": container with ID starting with ed034c8bfa179e8fd0e7df0a142a6444edbc9ad355732daa9585a071bbe35960 not found: ID does not exist" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.798336 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.798423 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5g2\" (UniqueName: \"kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: 
I1210 15:48:59.798456 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.798502 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.798545 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.841357 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.845880 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55db6555fd-9mbmg"] Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.899908 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.899998 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.900033 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5g2\" (UniqueName: \"kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.900054 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.900073 5114 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.901700 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.903166 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.903526 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.919950 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.930057 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5g2\" (UniqueName: \"kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2\") pod \"route-controller-manager-74b6b6789b-w8nsc\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.962462 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerStarted","Data":"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.975127 5114 generic.go:358] "Generic (PLEG): container finished" podID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerID="c2366ec02f86243c57d82eabc6b72016feebe73be160b68de4cdff4895790f69" exitCode=0 Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.975326 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerDied","Data":"c2366ec02f86243c57d82eabc6b72016feebe73be160b68de4cdff4895790f69"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.989331 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" 
event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerStarted","Data":"0bc2e3806d3c801e7d69d340c041bbf37740b51e4ced20cd717e57cb7582f157"} Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.990176 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tkn7z" podStartSLOduration=8.068314183 podStartE2EDuration="34.99015216s" podCreationTimestamp="2025-12-10 15:48:25 +0000 UTC" firstStartedPulling="2025-12-10 15:48:28.508326116 +0000 UTC m=+134.229127293" lastFinishedPulling="2025-12-10 15:48:55.430164093 +0000 UTC m=+161.150965270" observedRunningTime="2025-12-10 15:48:59.989451452 +0000 UTC m=+165.710252629" watchObservedRunningTime="2025-12-10 15:48:59.99015216 +0000 UTC m=+165.710953337" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.991384 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6bc87d94d7-5j9ds_88db556e-cb86-4720-bd46-ee54074d5b7a/controller-manager/0.log" Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.991515 5114 generic.go:358] "Generic (PLEG): container finished" podID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerID="9148aeae06668f10abd616e53f892141c9816baf4e980e2fde804f824cb120bd" exitCode=255 Dec 10 15:48:59 crc kubenswrapper[5114]: I1210 15:48:59.991607 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" event={"ID":"88db556e-cb86-4720-bd46-ee54074d5b7a","Type":"ContainerDied","Data":"9148aeae06668f10abd616e53f892141c9816baf4e980e2fde804f824cb120bd"} Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.034121 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.063799 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6bc87d94d7-5j9ds_88db556e-cb86-4720-bd46-ee54074d5b7a/controller-manager/0.log" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.063890 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.102478 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103026 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerName="controller-manager" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103043 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerName="controller-manager" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103130 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" containerName="controller-manager" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103566 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103627 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103659 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4mpv\" (UniqueName: \"kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103740 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103792 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.103821 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert\") pod \"88db556e-cb86-4720-bd46-ee54074d5b7a\" (UID: \"88db556e-cb86-4720-bd46-ee54074d5b7a\") " Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.104643 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.104790 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp" (OuterVolumeSpecName: "tmp") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.105092 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.105292 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config" (OuterVolumeSpecName: "config") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.109425 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.114637 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv" (OuterVolumeSpecName: "kube-api-access-t4mpv") pod "88db556e-cb86-4720-bd46-ee54074d5b7a" (UID: "88db556e-cb86-4720-bd46-ee54074d5b7a"). InnerVolumeSpecName "kube-api-access-t4mpv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205061 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205103 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88db556e-cb86-4720-bd46-ee54074d5b7a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205112 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205123 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88db556e-cb86-4720-bd46-ee54074d5b7a-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205134 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4mpv\" (UniqueName: \"kubernetes.io/projected/88db556e-cb86-4720-bd46-ee54074d5b7a-kube-api-access-t4mpv\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.205142 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88db556e-cb86-4720-bd46-ee54074d5b7a-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.213999 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.214204 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.297053 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dvt8r" podStartSLOduration=10.103920475 podStartE2EDuration="37.297040734s" podCreationTimestamp="2025-12-10 15:48:23 +0000 UTC" firstStartedPulling="2025-12-10 15:48:27.412678054 +0000 UTC m=+133.133479231" lastFinishedPulling="2025-12-10 15:48:54.605798293 +0000 UTC m=+160.326599490" observedRunningTime="2025-12-10 15:49:00.295738501 +0000 UTC m=+166.016539678" watchObservedRunningTime="2025-12-10 15:49:00.297040734 +0000 UTC m=+166.017841901" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.307824 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.307895 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.307924 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.307954 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.308017 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.308059 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8hw\" (UniqueName: \"kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.352427 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gn7sf" 
podStartSLOduration=10.180022739 podStartE2EDuration="37.352407905s" podCreationTimestamp="2025-12-10 15:48:23 +0000 UTC" firstStartedPulling="2025-12-10 15:48:27.435612053 +0000 UTC m=+133.156413230" lastFinishedPulling="2025-12-10 15:48:54.607997219 +0000 UTC m=+160.328798396" observedRunningTime="2025-12-10 15:49:00.35141913 +0000 UTC m=+166.072220327" watchObservedRunningTime="2025-12-10 15:49:00.352407905 +0000 UTC m=+166.073209082" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.354036 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfhws" podStartSLOduration=10.112526466 podStartE2EDuration="37.354026766s" podCreationTimestamp="2025-12-10 15:48:23 +0000 UTC" firstStartedPulling="2025-12-10 15:48:27.367915894 +0000 UTC m=+133.088717071" lastFinishedPulling="2025-12-10 15:48:54.609416194 +0000 UTC m=+160.330217371" observedRunningTime="2025-12-10 15:49:00.324307389 +0000 UTC m=+166.045108566" watchObservedRunningTime="2025-12-10 15:49:00.354026766 +0000 UTC m=+166.074827943" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.376976 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qbbrv" podStartSLOduration=9.253836468 podStartE2EDuration="35.376955722s" podCreationTimestamp="2025-12-10 15:48:25 +0000 UTC" firstStartedPulling="2025-12-10 15:48:28.48629453 +0000 UTC m=+134.207095707" lastFinishedPulling="2025-12-10 15:48:54.609413774 +0000 UTC m=+160.330214961" observedRunningTime="2025-12-10 15:49:00.372723856 +0000 UTC m=+166.093525033" watchObservedRunningTime="2025-12-10 15:49:00.376955722 +0000 UTC m=+166.097756899" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.408931 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.409013 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.409041 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.409089 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.409119 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7d8hw\" (UniqueName: 
\"kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.409149 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.410402 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.411290 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.411601 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.411952 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.416958 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.430430 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d8hw\" (UniqueName: \"kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw\") pod \"controller-manager-6df9c98778-pwhd4\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.687634 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.694127 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc7b4be-eece-477e-8317-6eff6f579ca8" path="/var/lib/kubelet/pods/abc7b4be-eece-477e-8317-6eff6f579ca8/volumes" Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.743878 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:49:00 crc kubenswrapper[5114]: W1210 15:49:00.760394 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f931f3_1c81_4f43_b301_12f4a95b4e0d.slice/crio-8e2969fe35dd23a613cf8ccbb1f9a162fdd15b3396ffb363aab3218691792b3e WatchSource:0}: Error finding container 8e2969fe35dd23a613cf8ccbb1f9a162fdd15b3396ffb363aab3218691792b3e: Status 404 returned error can't find the container with id 8e2969fe35dd23a613cf8ccbb1f9a162fdd15b3396ffb363aab3218691792b3e Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.969697 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:00 crc kubenswrapper[5114]: I1210 15:49:00.998894 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerStarted","Data":"74980a3cedccf9f132c9f95a12b67062493c4f6a163850d9655a0f17f6e60c40"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.001248 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" event={"ID":"67f931f3-1c81-4f43-b301-12f4a95b4e0d","Type":"ContainerStarted","Data":"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.001310 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" event={"ID":"67f931f3-1c81-4f43-b301-12f4a95b4e0d","Type":"ContainerStarted","Data":"8e2969fe35dd23a613cf8ccbb1f9a162fdd15b3396ffb363aab3218691792b3e"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.002304 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.003297 5114 patch_prober.go:28] interesting pod/route-controller-manager-74b6b6789b-w8nsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.003345 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.005171 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" 
event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerStarted","Data":"b24f20bead1556c4687abb8de7e3dbb3e669a42ea30c7f97f66017d0b3a4dc1e"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.006485 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" event={"ID":"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8","Type":"ContainerStarted","Data":"27bbfbf720aeaff6bc3ba3481772a7a37ce5d254d0e8b016560a9b8542246414"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.007929 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6bc87d94d7-5j9ds_88db556e-cb86-4720-bd46-ee54074d5b7a/controller-manager/0.log" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.008066 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" event={"ID":"88db556e-cb86-4720-bd46-ee54074d5b7a","Type":"ContainerDied","Data":"208d0e23488dde85270efd91fe01bb17e78bb81488c4ad82256651767bc4f1a7"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.008103 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.008115 5114 scope.go:117] "RemoveContainer" containerID="9148aeae06668f10abd616e53f892141c9816baf4e980e2fde804f824cb120bd" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.010635 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerStarted","Data":"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012"} Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.024249 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-clgwg" podStartSLOduration=10.004982408 podStartE2EDuration="38.024231652s" podCreationTimestamp="2025-12-10 15:48:23 +0000 UTC" firstStartedPulling="2025-12-10 15:48:27.404107438 +0000 UTC m=+133.124908625" lastFinishedPulling="2025-12-10 15:48:55.423356682 +0000 UTC m=+161.144157869" observedRunningTime="2025-12-10 15:49:01.020832557 +0000 UTC m=+166.741633754" watchObservedRunningTime="2025-12-10 15:49:01.024231652 +0000 UTC m=+166.745032829" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.046250 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" podStartSLOduration=5.046236095 podStartE2EDuration="5.046236095s" podCreationTimestamp="2025-12-10 15:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:01.045418534 +0000 UTC m=+166.766219711" watchObservedRunningTime="2025-12-10 15:49:01.046236095 +0000 UTC m=+166.767037272" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.071605 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2zlq" podStartSLOduration=6.142646882 podStartE2EDuration="35.071587692s" podCreationTimestamp="2025-12-10 15:48:26 +0000 UTC" firstStartedPulling="2025-12-10 15:48:28.470176113 +0000 UTC m=+134.190977290" lastFinishedPulling="2025-12-10 15:48:57.399116923 +0000 UTC m=+163.119918100" observedRunningTime="2025-12-10 
15:49:01.068403182 +0000 UTC m=+166.789204379" watchObservedRunningTime="2025-12-10 15:49:01.071587692 +0000 UTC m=+166.792388869" Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.085986 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:49:01 crc kubenswrapper[5114]: I1210 15:49:01.094203 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bc87d94d7-5j9ds"] Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.016186 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" event={"ID":"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8","Type":"ContainerStarted","Data":"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65"} Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.016321 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.022653 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.024686 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.043017 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9h94" podStartSLOduration=7.220965911 podStartE2EDuration="36.042984898s" podCreationTimestamp="2025-12-10 15:48:26 +0000 UTC" firstStartedPulling="2025-12-10 15:48:28.524480964 +0000 UTC m=+134.245282141" lastFinishedPulling="2025-12-10 15:48:57.346499951 +0000 UTC m=+163.067301128" observedRunningTime="2025-12-10 15:49:01.110994523 +0000 UTC m=+166.831795700" watchObservedRunningTime="2025-12-10 15:49:02.042984898 +0000 UTC m=+167.763786075" Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.063795 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" podStartSLOduration=6.063778051 podStartE2EDuration="6.063778051s" podCreationTimestamp="2025-12-10 15:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:02.044910986 +0000 UTC m=+167.765712173" watchObservedRunningTime="2025-12-10 15:49:02.063778051 +0000 UTC m=+167.784579228" Dec 10 15:49:02 crc kubenswrapper[5114]: I1210 15:49:02.577296 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88db556e-cb86-4720-bd46-ee54074d5b7a" path="/var/lib/kubelet/pods/88db556e-cb86-4720-bd46-ee54074d5b7a/volumes" Dec 10 15:49:03 crc kubenswrapper[5114]: I1210 15:49:03.317897 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.104090 5114 ???:1] "http: TLS handshake error from 192.168.126.11:53922: no serving certificate available for the kubelet" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.676368 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:49:04 crc kubenswrapper[5114]: 
I1210 15:49:04.676405 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.676724 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.680474 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686239 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686412 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686428 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686442 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686452 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686461 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686515 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686574 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686583 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686643 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.686702 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.688289 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.734697 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.735841 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.735905 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.745540 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.769611 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.837080 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.837132 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.837223 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:04 crc kubenswrapper[5114]: I1210 15:49:04.859427 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:05 crc kubenswrapper[5114]: I1210 15:49:05.025364 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:05 crc kubenswrapper[5114]: I1210 15:49:05.238869 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 10 15:49:05 crc kubenswrapper[5114]: E1210 15:49:05.454639 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-conmon-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6e28a6_b1a9_4942_8457_e54258393016.slice/crio-1b1a8fa0e80fd36fe13e3dd77a7af89a418a45139b9e394260c5c24cb90fde7c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:49:06 crc kubenswrapper[5114]: I1210 15:49:06.041992 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"19079ac7-d811-42ee-b363-c4f37ef7e499","Type":"ContainerStarted","Data":"8088abe50f13102c9cda23abc6674a451b24dd5d9ce88d89ca0be0160abb68b3"} Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.122461 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.122503 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.128519 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.128724 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.159498 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.159834 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.163590 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.173460 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.173566 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.181907 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.223836 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:07 crc kubenswrapper[5114]: I1210 15:49:07.234549 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:49:07 crc 
kubenswrapper[5114]: I1210 15:49:07.918786 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.410596 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"19079ac7-d811-42ee-b363-c4f37ef7e499","Type":"ContainerStarted","Data":"cfcecb1f8ebcd71ec34b45886d8fc81b39c2c3104d442c6154876b72f47f2837"} Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.410639 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.410654 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.412515 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gn7sf" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="registry-server" containerID="cri-o://d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e" gracePeriod=2 Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.414316 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.454669 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=5.454652818 podStartE2EDuration="5.454652818s" podCreationTimestamp="2025-12-10 15:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:08.452730539 +0000 UTC m=+174.173531716" watchObservedRunningTime="2025-12-10 15:49:08.454652818 +0000 UTC m=+174.175453995" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.468671 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.483737 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.486024 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.486308 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.486374 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 
15:49:08.493925 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.497484 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.593008 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.593145 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.593206 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.593790 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.593867 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.631963 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access\") pod \"installer-12-crc\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.792601 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.796164 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh747\" (UniqueName: \"kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747\") pod \"949ddda2-62c3-484c-9034-3b447502cf4d\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.796431 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content\") pod \"949ddda2-62c3-484c-9034-3b447502cf4d\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.796511 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities\") pod \"949ddda2-62c3-484c-9034-3b447502cf4d\" (UID: \"949ddda2-62c3-484c-9034-3b447502cf4d\") " Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.797575 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities" (OuterVolumeSpecName: "utilities") pod "949ddda2-62c3-484c-9034-3b447502cf4d" (UID: "949ddda2-62c3-484c-9034-3b447502cf4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.804922 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747" (OuterVolumeSpecName: "kube-api-access-bh747") pod "949ddda2-62c3-484c-9034-3b447502cf4d" (UID: "949ddda2-62c3-484c-9034-3b447502cf4d"). InnerVolumeSpecName "kube-api-access-bh747". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.823696 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.841713 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "949ddda2-62c3-484c-9034-3b447502cf4d" (UID: "949ddda2-62c3-484c-9034-3b447502cf4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.900893 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bh747\" (UniqueName: \"kubernetes.io/projected/949ddda2-62c3-484c-9034-3b447502cf4d-kube-api-access-bh747\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.900940 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:08 crc kubenswrapper[5114]: I1210 15:49:08.900956 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949ddda2-62c3-484c-9034-3b447502cf4d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.058812 5114 generic.go:358] "Generic (PLEG): container finished" podID="19079ac7-d811-42ee-b363-c4f37ef7e499" containerID="cfcecb1f8ebcd71ec34b45886d8fc81b39c2c3104d442c6154876b72f47f2837" exitCode=0 Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.058986 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"19079ac7-d811-42ee-b363-c4f37ef7e499","Type":"ContainerDied","Data":"cfcecb1f8ebcd71ec34b45886d8fc81b39c2c3104d442c6154876b72f47f2837"} Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.062504 5114 generic.go:358] "Generic (PLEG): container finished" podID="949ddda2-62c3-484c-9034-3b447502cf4d" containerID="d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e" exitCode=0 Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.062570 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn7sf" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.062653 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerDied","Data":"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e"} Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.062704 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn7sf" event={"ID":"949ddda2-62c3-484c-9034-3b447502cf4d","Type":"ContainerDied","Data":"757b4dbf81394fffa875318b552260c6723fb5afa0fa7e83b4e93757491a6053"} Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.062726 5114 scope.go:117] "RemoveContainer" containerID="d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.090739 5114 scope.go:117] "RemoveContainer" containerID="f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.113854 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.116800 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gn7sf"] Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.123164 5114 scope.go:117] "RemoveContainer" containerID="3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.137735 5114 scope.go:117] "RemoveContainer" containerID="d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e" Dec 10 15:49:09 crc kubenswrapper[5114]: E1210 15:49:09.138117 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e\": container with ID starting with d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e not found: ID does not exist" containerID="d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.138156 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e"} err="failed to get container status \"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e\": rpc error: code = NotFound desc = could not find container \"d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e\": container with ID starting with d70786639c1b7bd477f3fbd28fce6f38be15e98017d624b1c40a7fd7e3248d8e not found: ID does not exist" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.138184 5114 scope.go:117] "RemoveContainer" containerID="f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7" Dec 10 15:49:09 crc kubenswrapper[5114]: E1210 15:49:09.138474 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7\": container with ID starting with f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7 not found: ID does not exist" containerID="f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.138499 5114 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7"} err="failed to get container status \"f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7\": rpc error: code = NotFound desc = could not find container \"f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7\": container with ID starting with f06c4db4717815fb7a2e9b71612780d4381b4103e5f68f6af7dce6c98eeca4b7 not found: ID does not exist" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.138528 5114 scope.go:117] "RemoveContainer" containerID="3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6" Dec 10 15:49:09 crc kubenswrapper[5114]: E1210 15:49:09.138730 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6\": container with ID starting with 3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6 not found: ID does not exist" containerID="3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.138755 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6"} err="failed to get container status \"3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6\": rpc error: code = NotFound desc = could not find container \"3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6\": container with ID starting with 3f63220ad1e81cf1ec1cf51fdf80bbe382704c1b95e63dfef1571f666821dcb6 not found: ID does not exist" Dec 10 15:49:09 crc kubenswrapper[5114]: I1210 15:49:09.216862 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.076241 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2d48d128-3260-43c9-ab7a-d41717d59b73","Type":"ContainerStarted","Data":"46573980633b4214a9f903f785503b79854fffc1be251a7966a4b3463188959f"} Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.076891 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2d48d128-3260-43c9-ab7a-d41717d59b73","Type":"ContainerStarted","Data":"c058f4e5b0c0525f17e3bf1a4d036a9dd1a61b3ff372361ae14688d265e39395"} Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.097645 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.097621104 podStartE2EDuration="3.097621104s" podCreationTimestamp="2025-12-10 15:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:10.096074665 +0000 UTC m=+175.816875872" watchObservedRunningTime="2025-12-10 15:49:10.097621104 +0000 UTC m=+175.818422281" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.327375 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.327637 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qbbrv" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" 
containerName="registry-server" containerID="cri-o://7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1" gracePeriod=2 Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.346849 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.422795 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access\") pod \"19079ac7-d811-42ee-b363-c4f37ef7e499\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.422911 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir\") pod \"19079ac7-d811-42ee-b363-c4f37ef7e499\" (UID: \"19079ac7-d811-42ee-b363-c4f37ef7e499\") " Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.423033 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "19079ac7-d811-42ee-b363-c4f37ef7e499" (UID: "19079ac7-d811-42ee-b363-c4f37ef7e499"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.423380 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19079ac7-d811-42ee-b363-c4f37ef7e499-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.429646 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "19079ac7-d811-42ee-b363-c4f37ef7e499" (UID: "19079ac7-d811-42ee-b363-c4f37ef7e499"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.524485 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19079ac7-d811-42ee-b363-c4f37ef7e499-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.531288 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.576785 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" path="/var/lib/kubelet/pods/949ddda2-62c3-484c-9034-3b447502cf4d/volumes" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.793157 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.830614 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxq2x\" (UniqueName: \"kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x\") pod \"d38bc69a-988a-4bdc-9141-dc5d0019908e\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.831323 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities\") pod \"d38bc69a-988a-4bdc-9141-dc5d0019908e\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.831598 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content\") pod \"d38bc69a-988a-4bdc-9141-dc5d0019908e\" (UID: \"d38bc69a-988a-4bdc-9141-dc5d0019908e\") " Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.835403 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x" (OuterVolumeSpecName: "kube-api-access-qxq2x") pod "d38bc69a-988a-4bdc-9141-dc5d0019908e" (UID: "d38bc69a-988a-4bdc-9141-dc5d0019908e"). InnerVolumeSpecName "kube-api-access-qxq2x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.835858 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities" (OuterVolumeSpecName: "utilities") pod "d38bc69a-988a-4bdc-9141-dc5d0019908e" (UID: "d38bc69a-988a-4bdc-9141-dc5d0019908e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.838779 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qxq2x\" (UniqueName: \"kubernetes.io/projected/d38bc69a-988a-4bdc-9141-dc5d0019908e-kube-api-access-qxq2x\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.838813 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.853787 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d38bc69a-988a-4bdc-9141-dc5d0019908e" (UID: "d38bc69a-988a-4bdc-9141-dc5d0019908e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:10 crc kubenswrapper[5114]: I1210 15:49:10.941868 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38bc69a-988a-4bdc-9141-dc5d0019908e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.083433 5114 generic.go:358] "Generic (PLEG): container finished" podID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerID="7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1" exitCode=0 Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.083464 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbbrv" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.083460 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerDied","Data":"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1"} Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.083509 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbbrv" event={"ID":"d38bc69a-988a-4bdc-9141-dc5d0019908e","Type":"ContainerDied","Data":"570347bc52216d64f007fa1f9cf2b836e1e8463f2006243f0ed527757cc868de"} Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.083529 5114 scope.go:117] "RemoveContainer" containerID="7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.088441 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"19079ac7-d811-42ee-b363-c4f37ef7e499","Type":"ContainerDied","Data":"8088abe50f13102c9cda23abc6674a451b24dd5d9ce88d89ca0be0160abb68b3"} Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.088480 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8088abe50f13102c9cda23abc6674a451b24dd5d9ce88d89ca0be0160abb68b3" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.088650 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9h94" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="registry-server" containerID="cri-o://b24f20bead1556c4687abb8de7e3dbb3e669a42ea30c7f97f66017d0b3a4dc1e" gracePeriod=2 Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.088927 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.120326 5114 scope.go:117] "RemoveContainer" containerID="9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.126211 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.132568 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbbrv"] Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.153142 5114 scope.go:117] "RemoveContainer" containerID="acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.168869 5114 scope.go:117] "RemoveContainer" containerID="7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1" Dec 10 15:49:11 crc kubenswrapper[5114]: E1210 15:49:11.169670 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1\": container with ID starting with 7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1 not found: ID does not exist" containerID="7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.169734 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1"} err="failed to get container status \"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1\": rpc error: code = NotFound desc = could not find container \"7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1\": container with ID starting with 7b380093df269184a5d507bf3ce2b5a7b55c76aef79ba3587135ced20a3040c1 not found: ID does not exist" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.169762 5114 scope.go:117] "RemoveContainer" containerID="9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599" Dec 10 15:49:11 crc kubenswrapper[5114]: E1210 15:49:11.170010 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599\": container with ID starting with 9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599 not found: ID does not exist" containerID="9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.170038 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599"} err="failed to get container status \"9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599\": rpc error: code = NotFound desc = could not find container \"9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599\": container with ID starting with 9f9d0bb4f7df47304c385f21703f40715f873d1bc775aabb5d36435ab4390599 not found: ID does not exist" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.170054 5114 scope.go:117] "RemoveContainer" containerID="acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1" Dec 10 15:49:11 crc kubenswrapper[5114]: E1210 15:49:11.170257 5114 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1\": container with ID starting with acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1 not found: ID does not exist" containerID="acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1" Dec 10 15:49:11 crc kubenswrapper[5114]: I1210 15:49:11.170345 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1"} err="failed to get container status \"acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1\": rpc error: code = NotFound desc = could not find container \"acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1\": container with ID starting with acec7dad167d0b7bce6874e2e0eba5c8bf358b813840242b5d494056fd33b5f1 not found: ID does not exist" Dec 10 15:49:12 crc kubenswrapper[5114]: I1210 15:49:12.576618 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" path="/var/lib/kubelet/pods/d38bc69a-988a-4bdc-9141-dc5d0019908e/volumes" Dec 10 15:49:13 crc kubenswrapper[5114]: I1210 15:49:13.103056 5114 generic.go:358] "Generic (PLEG): container finished" podID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerID="b24f20bead1556c4687abb8de7e3dbb3e669a42ea30c7f97f66017d0b3a4dc1e" exitCode=0 Dec 10 15:49:13 crc kubenswrapper[5114]: I1210 15:49:13.103157 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerDied","Data":"b24f20bead1556c4687abb8de7e3dbb3e669a42ea30c7f97f66017d0b3a4dc1e"} Dec 10 15:49:14 crc kubenswrapper[5114]: I1210 15:49:14.737080 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.013341 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.103848 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities\") pod \"af5ea968-fe23-45bd-9ecd-8798399151e6\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.103982 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc72d\" (UniqueName: \"kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d\") pod \"af5ea968-fe23-45bd-9ecd-8798399151e6\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.104027 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content\") pod \"af5ea968-fe23-45bd-9ecd-8798399151e6\" (UID: \"af5ea968-fe23-45bd-9ecd-8798399151e6\") " Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.105043 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities" (OuterVolumeSpecName: "utilities") pod "af5ea968-fe23-45bd-9ecd-8798399151e6" (UID: "af5ea968-fe23-45bd-9ecd-8798399151e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.118116 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9h94" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.118149 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9h94" event={"ID":"af5ea968-fe23-45bd-9ecd-8798399151e6","Type":"ContainerDied","Data":"e4813820b4848acb02a7b2dd137f43301b6aac72e597ee7c1a802c9084744038"} Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.118255 5114 scope.go:117] "RemoveContainer" containerID="b24f20bead1556c4687abb8de7e3dbb3e669a42ea30c7f97f66017d0b3a4dc1e" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.191416 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d" (OuterVolumeSpecName: "kube-api-access-gc72d") pod "af5ea968-fe23-45bd-9ecd-8798399151e6" (UID: "af5ea968-fe23-45bd-9ecd-8798399151e6"). InnerVolumeSpecName "kube-api-access-gc72d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.205655 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.205702 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gc72d\" (UniqueName: \"kubernetes.io/projected/af5ea968-fe23-45bd-9ecd-8798399151e6-kube-api-access-gc72d\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.223183 5114 scope.go:117] "RemoveContainer" containerID="c2366ec02f86243c57d82eabc6b72016feebe73be160b68de4cdff4895790f69" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.231616 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af5ea968-fe23-45bd-9ecd-8798399151e6" (UID: "af5ea968-fe23-45bd-9ecd-8798399151e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.247783 5114 scope.go:117] "RemoveContainer" containerID="7e80796ee88d4b41491d97fd417dfc84667a2b4e3b3e13d4b4c8d40749f31cb5" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.307035 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5ea968-fe23-45bd-9ecd-8798399151e6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.448926 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:49:15 crc kubenswrapper[5114]: I1210 15:49:15.453599 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9h94"] Dec 10 15:49:16 crc kubenswrapper[5114]: I1210 15:49:16.574492 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" path="/var/lib/kubelet/pods/af5ea968-fe23-45bd-9ecd-8798399151e6/volumes" Dec 10 15:49:18 crc kubenswrapper[5114]: I1210 15:49:18.938993 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:49:18 crc kubenswrapper[5114]: I1210 15:49:18.939831 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-clgwg" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="registry-server" containerID="cri-o://74980a3cedccf9f132c9f95a12b67062493c4f6a163850d9655a0f17f6e60c40" gracePeriod=2 Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.153105 5114 generic.go:358] "Generic (PLEG): container finished" podID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerID="74980a3cedccf9f132c9f95a12b67062493c4f6a163850d9655a0f17f6e60c40" exitCode=0 Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.153208 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerDied","Data":"74980a3cedccf9f132c9f95a12b67062493c4f6a163850d9655a0f17f6e60c40"} Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.383024 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.460768 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content\") pod \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.460967 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqtd\" (UniqueName: \"kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd\") pod \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.461072 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities\") pod \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\" (UID: \"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e\") " Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.462160 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities" (OuterVolumeSpecName: "utilities") pod "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" (UID: "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.467004 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd" (OuterVolumeSpecName: "kube-api-access-frqtd") pod "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" (UID: "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e"). InnerVolumeSpecName "kube-api-access-frqtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.514061 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" (UID: "6568bc5a-ae55-48c0-b351-c5fbfafc3a6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.562801 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.562843 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frqtd\" (UniqueName: \"kubernetes.io/projected/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-kube-api-access-frqtd\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.562861 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:19 crc kubenswrapper[5114]: I1210 15:49:19.719787 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.167828 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-clgwg" Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.167827 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clgwg" event={"ID":"6568bc5a-ae55-48c0-b351-c5fbfafc3a6e","Type":"ContainerDied","Data":"294a415a4a4962292c06366d514e97e0e19a65fc335a73732416e1f29889f00f"} Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.168293 5114 scope.go:117] "RemoveContainer" containerID="74980a3cedccf9f132c9f95a12b67062493c4f6a163850d9655a0f17f6e60c40" Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.181537 5114 scope.go:117] "RemoveContainer" containerID="990b383fcb5773dc366e6508c76dcf7e3a8b0f2d95cfa74389333ab118dc48b4" Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.197177 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.201674 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-clgwg"] Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.203354 5114 scope.go:117] "RemoveContainer" containerID="4818d2f2859cdd56a26f14dc865a72393e3a30d570ec539c8ddd866ad8414488" Dec 10 15:49:20 crc kubenswrapper[5114]: I1210 15:49:20.578603 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" path="/var/lib/kubelet/pods/6568bc5a-ae55-48c0-b351-c5fbfafc3a6e/volumes" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.227098 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.227950 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" podUID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" containerName="controller-manager" containerID="cri-o://16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65" gracePeriod=30 Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.253737 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.254047 5114 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerName="route-controller-manager" containerID="cri-o://6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0" gracePeriod=30 Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.738635 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.761856 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762511 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762531 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762543 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762551 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762560 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762567 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762586 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19079ac7-d811-42ee-b363-c4f37ef7e499" containerName="pruner" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762593 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="19079ac7-d811-42ee-b363-c4f37ef7e499" containerName="pruner" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762602 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerName="route-controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762609 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerName="route-controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762618 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762623 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762632 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762637 5114 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762646 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762652 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762661 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762667 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762677 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762684 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762696 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762702 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762713 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762718 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762729 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762734 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="extract-utilities" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762742 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762747 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="extract-content" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762854 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="19079ac7-d811-42ee-b363-c4f37ef7e499" containerName="pruner" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762865 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="d38bc69a-988a-4bdc-9141-dc5d0019908e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762872 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="af5ea968-fe23-45bd-9ecd-8798399151e6" 
containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762879 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="949ddda2-62c3-484c-9034-3b447502cf4d" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762887 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6568bc5a-ae55-48c0-b351-c5fbfafc3a6e" containerName="registry-server" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.762892 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerName="route-controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.764433 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5g2\" (UniqueName: \"kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2\") pod \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.764569 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert\") pod \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.765488 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config\") pod \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.765527 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca\") pod \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.765566 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp\") pod \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\" (UID: \"67f931f3-1c81-4f43-b301-12f4a95b4e0d\") " Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.766265 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp" (OuterVolumeSpecName: "tmp") pod "67f931f3-1c81-4f43-b301-12f4a95b4e0d" (UID: "67f931f3-1c81-4f43-b301-12f4a95b4e0d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.766497 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca" (OuterVolumeSpecName: "client-ca") pod "67f931f3-1c81-4f43-b301-12f4a95b4e0d" (UID: "67f931f3-1c81-4f43-b301-12f4a95b4e0d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.766543 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config" (OuterVolumeSpecName: "config") pod "67f931f3-1c81-4f43-b301-12f4a95b4e0d" (UID: "67f931f3-1c81-4f43-b301-12f4a95b4e0d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.767798 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.771071 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67f931f3-1c81-4f43-b301-12f4a95b4e0d" (UID: "67f931f3-1c81-4f43-b301-12f4a95b4e0d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.771506 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2" (OuterVolumeSpecName: "kube-api-access-4s5g2") pod "67f931f3-1c81-4f43-b301-12f4a95b4e0d" (UID: "67f931f3-1c81-4f43-b301-12f4a95b4e0d"). InnerVolumeSpecName "kube-api-access-4s5g2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.776518 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.868196 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74db\" (UniqueName: \"kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.869046 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.869126 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.869333 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.869923 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: 
\"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.870075 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67f931f3-1c81-4f43-b301-12f4a95b4e0d-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.870311 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4s5g2\" (UniqueName: \"kubernetes.io/projected/67f931f3-1c81-4f43-b301-12f4a95b4e0d-kube-api-access-4s5g2\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.870406 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f931f3-1c81-4f43-b301-12f4a95b4e0d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.871076 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.871223 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67f931f3-1c81-4f43-b301-12f4a95b4e0d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.895981 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.919160 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.919738 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" containerName="controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.919753 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" containerName="controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.919829 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" containerName="controller-manager" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.926826 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.969522 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.971887 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.971958 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.971982 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z74db\" (UniqueName: \"kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.972009 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.972025 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.973043 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.973808 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.974063 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp\") pod 
\"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.979133 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:36 crc kubenswrapper[5114]: I1210 15:49:36.989834 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74db\" (UniqueName: \"kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db\") pod \"route-controller-manager-7fcdb7fc5b-2thsd\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.073492 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.073881 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.073949 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.073985 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074030 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074070 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d8hw\" (UniqueName: \"kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw\") pod \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\" (UID: \"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8\") " Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074242 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " 
pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074256 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp" (OuterVolumeSpecName: "tmp") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074389 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnwzm\" (UniqueName: \"kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074433 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074462 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074504 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074547 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074609 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074681 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074714 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config" (OuterVolumeSpecName: "config") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.074833 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.077040 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.078040 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw" (OuterVolumeSpecName: "kube-api-access-7d8hw") pod "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" (UID: "02c2b89c-3f6c-4a2e-98c3-beaf70f198c8"). InnerVolumeSpecName "kube-api-access-7d8hw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.114471 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176282 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176389 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jnwzm\" (UniqueName: \"kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176415 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176437 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176458 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176475 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176544 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176554 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176562 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7d8hw\" (UniqueName: \"kubernetes.io/projected/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-kube-api-access-7d8hw\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176571 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.176579 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.177608 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.178190 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.178263 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.179148 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.182505 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.203076 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnwzm\" (UniqueName: \"kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm\") pod \"controller-manager-5f8dcf6c95-hrkgs\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.264028 5114 generic.go:358] "Generic (PLEG): container finished" podID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" containerID="6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0" exitCode=0 Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.264150 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" event={"ID":"67f931f3-1c81-4f43-b301-12f4a95b4e0d","Type":"ContainerDied","Data":"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0"} Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.264175 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" event={"ID":"67f931f3-1c81-4f43-b301-12f4a95b4e0d","Type":"ContainerDied","Data":"8e2969fe35dd23a613cf8ccbb1f9a162fdd15b3396ffb363aab3218691792b3e"} Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.264191 5114 scope.go:117] "RemoveContainer" containerID="6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.264368 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.266978 5114 generic.go:358] "Generic (PLEG): container finished" podID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" containerID="16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65" exitCode=0 Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.267010 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" event={"ID":"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8","Type":"ContainerDied","Data":"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65"} Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.267034 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" event={"ID":"02c2b89c-3f6c-4a2e-98c3-beaf70f198c8","Type":"ContainerDied","Data":"27bbfbf720aeaff6bc3ba3481772a7a37ce5d254d0e8b016560a9b8542246414"} Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.267099 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9c98778-pwhd4" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.280501 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.283959 5114 scope.go:117] "RemoveContainer" containerID="6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0" Dec 10 15:49:37 crc kubenswrapper[5114]: E1210 15:49:37.286298 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0\": container with ID starting with 6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0 not found: ID does not exist" containerID="6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.286340 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0"} err="failed to get container status \"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0\": rpc error: code = NotFound desc = could not find container \"6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0\": container with ID starting with 6969dfad1a6ea32259d105a1b5228ba0199ac2d4c732a4d63737387ac2065cc0 not found: ID does not exist" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.286366 5114 scope.go:117] "RemoveContainer" containerID="16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.296633 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.298422 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.300916 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6df9c98778-pwhd4"] Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.310139 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.312722 5114 scope.go:117] "RemoveContainer" containerID="16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.313032 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74b6b6789b-w8nsc"] Dec 10 15:49:37 crc kubenswrapper[5114]: E1210 15:49:37.313250 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65\": container with ID starting with 16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65 not found: ID does not exist" containerID="16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.313324 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65"} err="failed to get container status \"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65\": rpc error: code = NotFound desc = could not find container 
\"16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65\": container with ID starting with 16d53276a9a0cd6a4b0cd2ad8d28730199b03cba5d3dfd7b27f172b240370e65 not found: ID does not exist" Dec 10 15:49:37 crc kubenswrapper[5114]: I1210 15:49:37.459465 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.274907 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" event={"ID":"8d238433-d5ee-408e-82a2-79db77556083","Type":"ContainerStarted","Data":"3e26606249e443f3b9c301d5e313e07afa5c59a76fbdb738ff033fd54687c0e1"} Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.275263 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" event={"ID":"8d238433-d5ee-408e-82a2-79db77556083","Type":"ContainerStarted","Data":"f9cec6be2280639a5a788e6c822385396333c70a4224a3fd4cf8bd983549a2fe"} Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.275299 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.279475 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" event={"ID":"a9f54733-5e32-42a4-9b3c-5545471995a4","Type":"ContainerStarted","Data":"8c56294a15dc2a6312e0fe763aeb6ce8c01bb15624905311a8f3082ebbd30009"} Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.279503 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" event={"ID":"a9f54733-5e32-42a4-9b3c-5545471995a4","Type":"ContainerStarted","Data":"3d685e0d6ecd26fc79d758e6b1dcf84b7ac3a1ac5e12184a984dc49ab1e0c8fa"} Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.279727 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.283458 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.285306 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.292044 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" podStartSLOduration=2.292027845 podStartE2EDuration="2.292027845s" podCreationTimestamp="2025-12-10 15:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:38.290439715 +0000 UTC m=+204.011240892" watchObservedRunningTime="2025-12-10 15:49:38.292027845 +0000 UTC m=+204.012829022" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.329242 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" podStartSLOduration=2.329217815 podStartE2EDuration="2.329217815s" podCreationTimestamp="2025-12-10 
15:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:38.326532747 +0000 UTC m=+204.047333934" watchObservedRunningTime="2025-12-10 15:49:38.329217815 +0000 UTC m=+204.050018992" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.575191 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c2b89c-3f6c-4a2e-98c3-beaf70f198c8" path="/var/lib/kubelet/pods/02c2b89c-3f6c-4a2e-98c3-beaf70f198c8/volumes" Dec 10 15:49:38 crc kubenswrapper[5114]: I1210 15:49:38.575888 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f931f3-1c81-4f43-b301-12f4a95b4e0d" path="/var/lib/kubelet/pods/67f931f3-1c81-4f43-b301-12f4a95b4e0d/volumes" Dec 10 15:49:44 crc kubenswrapper[5114]: I1210 15:49:44.755946 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" podUID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" containerName="oauth-openshift" containerID="cri-o://f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f" gracePeriod=15 Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.083519 5114 ???:1] "http: TLS handshake error from 192.168.126.11:34342: no serving certificate available for the kubelet" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.209302 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.255298 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg"] Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.256069 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" containerName="oauth-openshift" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.256088 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" containerName="oauth-openshift" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.256192 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" containerName="oauth-openshift" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.262533 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg"] Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.262703 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277053 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277132 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277187 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277254 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277177 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277344 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277384 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg7wd\" (UniqueName: \"kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277454 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277511 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277579 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277643 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277667 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277702 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277717 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277799 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277850 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.277909 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error\") pod \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\" (UID: \"8803937b-0d28-40bc-bdb9-12ea0b8d003c\") " Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.278551 5114 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.278585 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.278607 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.280015 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.286513 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.287254 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.288589 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.288784 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd" (OuterVolumeSpecName: "kube-api-access-bg7wd") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "kube-api-access-bg7wd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.289754 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.291664 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.291995 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.293538 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.298579 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.298738 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8803937b-0d28-40bc-bdb9-12ea0b8d003c" (UID: "8803937b-0d28-40bc-bdb9-12ea0b8d003c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.317633 5114 generic.go:358] "Generic (PLEG): container finished" podID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" containerID="f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f" exitCode=0 Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.317691 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" event={"ID":"8803937b-0d28-40bc-bdb9-12ea0b8d003c","Type":"ContainerDied","Data":"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f"} Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.317717 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" event={"ID":"8803937b-0d28-40bc-bdb9-12ea0b8d003c","Type":"ContainerDied","Data":"def0380f8bc9f779f38e6fbb9252e6f286c486d2e6a4555e8a946ed6dea3f9be"} Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.317733 5114 scope.go:117] "RemoveContainer" containerID="f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.317850 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qxtmf" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.340934 5114 scope.go:117] "RemoveContainer" containerID="f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f" Dec 10 15:49:45 crc kubenswrapper[5114]: E1210 15:49:45.342378 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f\": container with ID starting with f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f not found: ID does not exist" containerID="f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.342423 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f"} err="failed to get container status \"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f\": rpc error: code = NotFound desc = could not find container \"f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f\": container with ID starting with f745908e6b8dda6f3c2e1811af9aefcaebdda4c5576ef4e05c4dfb6f3f5c1c3f not found: ID does not exist" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.349422 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.352560 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qxtmf"] Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380168 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-session\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380228 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgs8h\" (UniqueName: \"kubernetes.io/projected/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-kube-api-access-tgs8h\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380261 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380310 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 
crc kubenswrapper[5114]: I1210 15:49:45.380329 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380349 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380368 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380385 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380416 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-dir\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380433 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380505 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380537 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-policies\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: 
\"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380558 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380582 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380777 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380802 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380816 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380828 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380838 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380849 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380859 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380870 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380881 5114 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bg7wd\" (UniqueName: \"kubernetes.io/projected/8803937b-0d28-40bc-bdb9-12ea0b8d003c-kube-api-access-bg7wd\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380891 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8803937b-0d28-40bc-bdb9-12ea0b8d003c-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.380900 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8803937b-0d28-40bc-bdb9-12ea0b8d003c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482240 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482585 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482679 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482774 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482870 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.482958 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-dir\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483037 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483131 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483214 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-policies\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483317 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483401 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483514 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-session\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483603 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgs8h\" (UniqueName: \"kubernetes.io/projected/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-kube-api-access-tgs8h\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483707 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483711 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.484029 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-policies\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.483030 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-audit-dir\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.484606 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.484785 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.486925 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-login\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.486940 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.487183 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-session\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.487434 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.488072 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-error\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.488176 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.488298 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.488341 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.507431 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgs8h\" (UniqueName: \"kubernetes.io/projected/2f4f6024-44fb-4000-bff7-8e2f774dc5cb-kube-api-access-tgs8h\") pod \"oauth-openshift-5dc4577bbb-vxzdg\" (UID: \"2f4f6024-44fb-4000-bff7-8e2f774dc5cb\") " pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:45 crc kubenswrapper[5114]: I1210 15:49:45.587081 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.001521 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg"] Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.329877 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" event={"ID":"2f4f6024-44fb-4000-bff7-8e2f774dc5cb","Type":"ContainerStarted","Data":"5f804d0c8e4149ed94bd7d51dd7f4d1860ea966a628ac59f39808304b3074e7f"} Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.330317 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" event={"ID":"2f4f6024-44fb-4000-bff7-8e2f774dc5cb","Type":"ContainerStarted","Data":"07c30dad40d228a8bf562d5a10afca238f6b3fab930d4e20c3879ad3d49602de"} Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.330333 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.332961 5114 patch_prober.go:28] interesting pod/oauth-openshift-5dc4577bbb-vxzdg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.333016 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" podUID="2f4f6024-44fb-4000-bff7-8e2f774dc5cb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Dec 10 15:49:46 crc kubenswrapper[5114]: I1210 15:49:46.574721 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8803937b-0d28-40bc-bdb9-12ea0b8d003c" path="/var/lib/kubelet/pods/8803937b-0d28-40bc-bdb9-12ea0b8d003c/volumes" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.345164 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.371703 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dc4577bbb-vxzdg" podStartSLOduration=28.371681436 podStartE2EDuration="28.371681436s" podCreationTimestamp="2025-12-10 15:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:49:46.357741382 +0000 UTC m=+212.078542559" watchObservedRunningTime="2025-12-10 15:49:47.371681436 +0000 UTC m=+213.092482623" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.745856 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.778713 5114 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.778762 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.778920 5114 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779231 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb" gracePeriod=15 Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779308 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad" gracePeriod=15 Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779332 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d" gracePeriod=15 Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779342 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6" gracePeriod=15 Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779400 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5" gracePeriod=15 Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779889 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779922 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779936 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779942 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779948 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779954 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779967 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 
15:49:47.779972 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779979 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.779984 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780000 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780006 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780012 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780018 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780028 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780033 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780111 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780120 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780127 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780137 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780143 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780149 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780157 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780262 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" 
Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780290 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.780407 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.793150 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.798316 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.808733 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924434 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924790 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924813 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924850 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924881 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924913 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924946 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.924986 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.925005 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:47 crc kubenswrapper[5114]: I1210 15:49:47.925026 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026230 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026350 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026375 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026394 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026408 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc 
kubenswrapper[5114]: I1210 15:49:48.026485 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026506 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026581 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026586 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026610 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026634 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026659 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026682 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026722 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026728 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026812 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026858 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.026873 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.027302 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.028447 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.106045 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:49:48 crc kubenswrapper[5114]: W1210 15:49:48.128810 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-1d97d88cf3a1c53ee27e210c98c595d457ee6b21baff9c622b1ff25085b2582c WatchSource:0}: Error finding container 1d97d88cf3a1c53ee27e210c98c595d457ee6b21baff9c622b1ff25085b2582c: Status 404 returned error can't find the container with id 1d97d88cf3a1c53ee27e210c98c595d457ee6b21baff9c622b1ff25085b2582c Dec 10 15:49:48 crc kubenswrapper[5114]: E1210 15:49:48.131827 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe5620682eb30 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,LastTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.349190 5114 generic.go:358] "Generic (PLEG): container finished" podID="2d48d128-3260-43c9-ab7a-d41717d59b73" containerID="46573980633b4214a9f903f785503b79854fffc1be251a7966a4b3463188959f" exitCode=0 Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.349260 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2d48d128-3260-43c9-ab7a-d41717d59b73","Type":"ContainerDied","Data":"46573980633b4214a9f903f785503b79854fffc1be251a7966a4b3463188959f"} Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.349933 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.350117 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.352087 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.353153 5114 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.354014 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad" exitCode=0 Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.354032 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6" exitCode=0 Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.354044 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d" exitCode=0 Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.354052 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5" exitCode=2 Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.354080 5114 scope.go:117] "RemoveContainer" containerID="e1c010c37667d5c045e43048e4405a03d43afd6ebe7774038d9d5a5c5bb8aaf4" Dec 10 15:49:48 crc kubenswrapper[5114]: I1210 15:49:48.356019 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"1d97d88cf3a1c53ee27e210c98c595d457ee6b21baff9c622b1ff25085b2582c"} Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.040067 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.041357 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.041897 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.042427 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.042907 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.043020 5114 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.043649 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.244866 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.365006 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.368213 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623"} Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.369554 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.370171 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: E1210 15:49:49.646229 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.761625 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.762328 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.762534 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.856671 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access\") pod \"2d48d128-3260-43c9-ab7a-d41717d59b73\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857166 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock\") pod \"2d48d128-3260-43c9-ab7a-d41717d59b73\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857195 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir\") pod \"2d48d128-3260-43c9-ab7a-d41717d59b73\" (UID: \"2d48d128-3260-43c9-ab7a-d41717d59b73\") " Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857333 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock" (OuterVolumeSpecName: "var-lock") pod "2d48d128-3260-43c9-ab7a-d41717d59b73" (UID: "2d48d128-3260-43c9-ab7a-d41717d59b73"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857463 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d48d128-3260-43c9-ab7a-d41717d59b73" (UID: "2d48d128-3260-43c9-ab7a-d41717d59b73"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857610 5114 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.857631 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d48d128-3260-43c9-ab7a-d41717d59b73-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.871025 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d48d128-3260-43c9-ab7a-d41717d59b73" (UID: "2d48d128-3260-43c9-ab7a-d41717d59b73"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:49:49 crc kubenswrapper[5114]: I1210 15:49:49.959003 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d48d128-3260-43c9-ab7a-d41717d59b73-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.116293 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe5620682eb30 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,LastTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.166361 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.166996 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.167603 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.168104 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.168366 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.262591 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.262704 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.262714 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.262935 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263003 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263032 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263054 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263097 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263469 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263472 5114 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263515 5114 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.263531 5114 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.265361 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.364791 5114 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.364824 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.376130 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.377016 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb" exitCode=0 Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.377123 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.377219 5114 scope.go:117] "RemoveContainer" containerID="d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.379128 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.379871 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2d48d128-3260-43c9-ab7a-d41717d59b73","Type":"ContainerDied","Data":"c058f4e5b0c0525f17e3bf1a4d036a9dd1a61b3ff372361ae14688d265e39395"} Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.379909 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c058f4e5b0c0525f17e3bf1a4d036a9dd1a61b3ff372361ae14688d265e39395" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.394073 5114 scope.go:117] "RemoveContainer" containerID="0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.399750 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.400470 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.400868 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc 
kubenswrapper[5114]: I1210 15:49:50.401213 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.401612 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.402083 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.406940 5114 scope.go:117] "RemoveContainer" containerID="7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.419049 5114 scope.go:117] "RemoveContainer" containerID="55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.434773 5114 scope.go:117] "RemoveContainer" containerID="c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.447861 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.448391 5114 scope.go:117] "RemoveContainer" containerID="7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.487246 5114 scope.go:117] "RemoveContainer" containerID="d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.487744 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\": container with ID starting with d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad not found: ID does not exist" containerID="d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.487856 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad"} err="failed to get container status \"d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\": rpc error: code = NotFound desc = could not find container \"d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad\": container with ID starting with d79fc0ad78427693b9ef01519261c475c49b29ab8dc64210c09f22886b3dcfad not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.487890 5114 scope.go:117] "RemoveContainer" 
containerID="0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.488199 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\": container with ID starting with 0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6 not found: ID does not exist" containerID="0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488238 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6"} err="failed to get container status \"0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\": rpc error: code = NotFound desc = could not find container \"0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6\": container with ID starting with 0f8dd78b836cacc6ac7bee1a11730500c94192df5a045eb37ae1c137a3cc0ad6 not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488253 5114 scope.go:117] "RemoveContainer" containerID="7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.488541 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\": container with ID starting with 7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d not found: ID does not exist" containerID="7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488563 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d"} err="failed to get container status \"7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\": rpc error: code = NotFound desc = could not find container \"7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d\": container with ID starting with 7398b71862f7cfabefc5644c5d6b4924bbde47edadad7f240aa37599d2b3da9d not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488582 5114 scope.go:117] "RemoveContainer" containerID="55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.488905 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\": container with ID starting with 55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5 not found: ID does not exist" containerID="55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488942 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5"} err="failed to get container status \"55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\": rpc error: code = NotFound desc = could not find container \"55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5\": container with ID starting with 
55ad03eb1a337191c414a5dbd0864a29632396ff234b68505a9a4b65c90d8eb5 not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.488967 5114 scope.go:117] "RemoveContainer" containerID="c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.489233 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\": container with ID starting with c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb not found: ID does not exist" containerID="c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.489267 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb"} err="failed to get container status \"c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\": rpc error: code = NotFound desc = could not find container \"c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb\": container with ID starting with c9a7475ba48862dfcb11fe65264384be264b4b7acd30761bc650e70dd3a78abb not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.489452 5114 scope.go:117] "RemoveContainer" containerID="7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613" Dec 10 15:49:50 crc kubenswrapper[5114]: E1210 15:49:50.489743 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\": container with ID starting with 7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613 not found: ID does not exist" containerID="7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.489774 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613"} err="failed to get container status \"7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\": rpc error: code = NotFound desc = could not find container \"7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613\": container with ID starting with 7e3d3b6b0e188659783d2b384d22a05ba8962e4fa49cd4caae040921c9add613 not found: ID does not exist" Dec 10 15:49:50 crc kubenswrapper[5114]: I1210 15:49:50.575437 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 10 15:49:52 crc kubenswrapper[5114]: E1210 15:49:52.049094 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="3.2s" Dec 10 15:49:54 crc kubenswrapper[5114]: I1210 15:49:54.573793 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection 
refused" Dec 10 15:49:54 crc kubenswrapper[5114]: I1210 15:49:54.574485 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:49:55 crc kubenswrapper[5114]: E1210 15:49:55.250051 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="6.4s" Dec 10 15:50:00 crc kubenswrapper[5114]: E1210 15:50:00.117921 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe5620682eb30 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,LastTimestamp:2025-12-10 15:49:48.131322672 +0000 UTC m=+213.852123839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.467500 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.468296 5114 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53" exitCode=1 Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.468413 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53"} Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.469032 5114 scope.go:117] "RemoveContainer" containerID="9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53" Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.469432 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.469768 5114 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:01 crc kubenswrapper[5114]: I1210 15:50:01.470200 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:01 crc kubenswrapper[5114]: E1210 15:50:01.651226 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="7s" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.477888 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.478075 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7cbaf697958fef621c4dcd039fc4c04614d3d35d637358fa8d48a5191ad44814"} Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.479714 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.480403 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.481009 5114 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.568533 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.571663 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.573836 5114 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.574424 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.585469 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.585690 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:02 crc kubenswrapper[5114]: E1210 15:50:02.586345 5114 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:02 crc kubenswrapper[5114]: I1210 15:50:02.586680 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.485362 5114 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="9d48639b669e0a0270ea0f4a52c4c65d49fc6e951a430efd4d07edc039ac4bd6" exitCode=0 Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.485484 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"9d48639b669e0a0270ea0f4a52c4c65d49fc6e951a430efd4d07edc039ac4bd6"} Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.485919 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"2c2098926cc532c35de1ae2d6f70c5dc45398bc974c89ee345a048619bf352ad"} Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.486420 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.486438 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.486872 5114 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.487112 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:03 crc kubenswrapper[5114]: E1210 15:50:03.487135 5114 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:03 crc kubenswrapper[5114]: I1210 15:50:03.487355 5114 status_manager.go:895] "Failed to get status for pod" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Dec 10 15:50:04 crc kubenswrapper[5114]: I1210 15:50:04.492966 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1d0363745cc3161adc687c48cff580bfa5d027d6b5d8cb4ae857f27433159e28"} Dec 10 15:50:04 crc kubenswrapper[5114]: I1210 15:50:04.493313 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5ea269361b75260efdc3de76dc0ae973e16777ebf2d1156531de51743413079a"} Dec 10 15:50:04 crc kubenswrapper[5114]: I1210 15:50:04.493326 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0c73a41c12f5925a5dabac02e29b58601befdefc2edafd45a8e35e96f9bb7829"} Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.500774 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7f0e7b09f060819e473bdb23610611621ecfe5bcec025c00d45dbcaa4a7be7d2"} Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.500829 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6070dc4292e475d5bc118c64947ca70b72fb4787d5879088bb472e0c2b2cdb86"} Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.501017 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.501124 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.501142 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:05 crc kubenswrapper[5114]: I1210 15:50:05.685547 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:50:07 crc kubenswrapper[5114]: I1210 15:50:07.587857 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:07 crc kubenswrapper[5114]: I1210 15:50:07.588163 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:07 crc kubenswrapper[5114]: I1210 15:50:07.594144 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:08 crc kubenswrapper[5114]: I1210 15:50:08.642572 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:50:08 crc kubenswrapper[5114]: I1210 15:50:08.643060 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 15:50:08 crc kubenswrapper[5114]: I1210 15:50:08.643125 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.510046 5114 kubelet.go:3329] "Deleted 
mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.510363 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.527889 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.527923 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.532331 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:10 crc kubenswrapper[5114]: I1210 15:50:10.534794 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="30c7728e-12e1-4974-95fe-dd48b62ca083" Dec 10 15:50:11 crc kubenswrapper[5114]: I1210 15:50:11.532557 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:11 crc kubenswrapper[5114]: I1210 15:50:11.532588 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e331166d-a33f-44c1-9a3e-f43cfee598a8" Dec 10 15:50:14 crc kubenswrapper[5114]: I1210 15:50:14.585793 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="30c7728e-12e1-4974-95fe-dd48b62ca083" Dec 10 15:50:18 crc kubenswrapper[5114]: I1210 15:50:18.643048 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 15:50:18 crc kubenswrapper[5114]: I1210 15:50:18.644425 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 15:50:20 crc kubenswrapper[5114]: I1210 15:50:20.079826 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 10 15:50:20 crc kubenswrapper[5114]: I1210 15:50:20.365402 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 10 15:50:20 crc kubenswrapper[5114]: I1210 15:50:20.517344 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.026254 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.168772 5114 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.171445 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.177065 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.657387 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.696148 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.823830 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.876894 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.877395 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:50:21 crc kubenswrapper[5114]: I1210 15:50:21.993818 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 10 15:50:22 crc kubenswrapper[5114]: I1210 15:50:22.259472 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 10 15:50:22 crc kubenswrapper[5114]: I1210 15:50:22.424709 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 10 15:50:22 crc kubenswrapper[5114]: I1210 15:50:22.855444 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 10 15:50:22 crc kubenswrapper[5114]: I1210 15:50:22.885760 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.035566 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.087460 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.147563 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 10 
15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.211631 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.261779 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.327720 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.374723 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.379962 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.415176 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.479813 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.518859 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.539063 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.590935 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.619077 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.651001 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.671211 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.682118 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.748358 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.753888 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.770914 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.820269 5114 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:23 crc kubenswrapper[5114]: I1210 15:50:23.925853 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.018207 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.019258 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.041813 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.071079 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.151996 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.252567 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.265930 5114 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.366587 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.368421 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.397916 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.459955 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.474552 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.514035 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.546718 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.587674 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.658140 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.680496 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.722784 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.729135 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.745998 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.771979 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.832350 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.881464 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.903372 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 10 15:50:24 crc kubenswrapper[5114]: I1210 15:50:24.940632 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.002471 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.005608 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.012579 5114 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.193891 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.203445 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.219062 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.239413 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.372634 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.435092 5114 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.435721 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.443877 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.455290 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.497187 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.617589 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.622106 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.739734 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.779832 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.793838 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.820068 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.843035 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.866257 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 10 15:50:25 crc kubenswrapper[5114]: I1210 15:50:25.936770 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.024869 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.057181 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.077988 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.146399 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 
15:50:26.461692 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.498736 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.687248 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.725049 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.810661 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.858738 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 10 15:50:26 crc kubenswrapper[5114]: I1210 15:50:26.971987 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.021865 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.059902 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.086112 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.153692 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.338830 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.401924 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.408913 5114 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.409877 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.409864751 podStartE2EDuration="40.409864751s" podCreationTimestamp="2025-12-10 15:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:50:10.28573757 +0000 UTC m=+236.006538747" watchObservedRunningTime="2025-12-10 15:50:27.409864751 +0000 UTC m=+253.130665928" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.419864 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:50:27 
crc kubenswrapper[5114]: I1210 15:50:27.419919 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.424614 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.426494 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.439993 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.439978142 podStartE2EDuration="17.439978142s" podCreationTimestamp="2025-12-10 15:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:50:27.436047792 +0000 UTC m=+253.156848969" watchObservedRunningTime="2025-12-10 15:50:27.439978142 +0000 UTC m=+253.160779319" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.588820 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.665635 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.694455 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.740307 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.746040 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.774202 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 10 15:50:27 crc kubenswrapper[5114]: I1210 15:50:27.902192 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.006229 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.092137 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.203815 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.229948 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.256480 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.270760 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.278256 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.280422 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.320035 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.376351 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.427958 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.488159 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.491945 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.571805 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.618643 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.629103 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.643785 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.643877 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.643931 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.644654 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"7cbaf697958fef621c4dcd039fc4c04614d3d35d637358fa8d48a5191ad44814"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.644760 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://7cbaf697958fef621c4dcd039fc4c04614d3d35d637358fa8d48a5191ad44814" gracePeriod=30 Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.671889 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.747468 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.781803 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.787972 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.842883 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.930206 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.933371 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 10 15:50:28 crc kubenswrapper[5114]: I1210 15:50:28.966154 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.010095 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.051977 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.249006 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.255225 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.322454 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.333223 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 10 15:50:29 crc kubenswrapper[5114]: 
I1210 15:50:29.343383 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.354750 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.410293 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.428899 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.445364 5114 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.467702 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.579824 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.649579 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.653587 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.678265 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.707862 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.710927 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.827331 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.848926 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.935393 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.939942 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.941654 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 10 15:50:29 crc kubenswrapper[5114]: I1210 15:50:29.989207 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.028197 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.070384 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.076512 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.170563 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.187808 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.204054 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.215941 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.268062 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.281877 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.326559 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.508713 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.532802 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.584887 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.590306 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.600804 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.625696 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.673351 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.673555 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.729575 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.790685 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 10 15:50:30 crc kubenswrapper[5114]: I1210 15:50:30.928830 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.015428 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.099369 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.144306 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.301636 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.345626 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.376675 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.428862 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.441338 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.570226 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.601595 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.713495 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.757637 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 10 15:50:31 crc kubenswrapper[5114]: I1210 15:50:31.820613 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.017454 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.164009 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.352824 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.363196 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.375509 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.509556 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.540933 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.595169 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.636672 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.730325 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.740624 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.749247 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.806335 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.814005 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.817477 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.854244 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.912993 5114 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:50:32 crc 
kubenswrapper[5114]: I1210 15:50:32.913342 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623" gracePeriod=5 Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.922300 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.925052 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.978086 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 10 15:50:32 crc kubenswrapper[5114]: I1210 15:50:32.979674 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.208156 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.217761 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.222031 5114 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.268919 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.447396 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.477237 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.792851 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.862580 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 10 15:50:33 crc kubenswrapper[5114]: I1210 15:50:33.891808 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.042753 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.139703 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.237400 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.245663 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.355046 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.473137 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.623014 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.658305 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.701136 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.857919 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 10 15:50:34 crc kubenswrapper[5114]: I1210 15:50:34.912977 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.054921 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.089461 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.189142 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.503395 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.651882 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.694431 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 10 15:50:35 crc kubenswrapper[5114]: I1210 15:50:35.990680 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 10 15:50:36 crc kubenswrapper[5114]: I1210 15:50:36.059033 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 10 15:50:36 crc kubenswrapper[5114]: I1210 15:50:36.111420 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 10 15:50:36 crc 
kubenswrapper[5114]: I1210 15:50:36.336934 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 10 15:50:36 crc kubenswrapper[5114]: I1210 15:50:36.354127 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 10 15:50:36 crc kubenswrapper[5114]: I1210 15:50:36.774862 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 10 15:50:37 crc kubenswrapper[5114]: I1210 15:50:37.758894 5114 ???:1] "http: TLS handshake error from 192.168.126.11:34892: no serving certificate available for the kubelet" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.480078 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.480176 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.579759 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584462 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584543 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584641 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584714 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584768 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584817 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584905 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584906 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.584914 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.585298 5114 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.585319 5114 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.585331 5114 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.585341 5114 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.593716 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.595326 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.595355 5114 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="074b6779-4887-4160-b938-2681750584c6" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.599441 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.599467 5114 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="074b6779-4887-4160-b938-2681750584c6" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.678555 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.678592 5114 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623" exitCode=137 Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.678795 5114 scope.go:117] "RemoveContainer" containerID="9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.678912 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.681850 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.686861 5114 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.693717 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.698326 5114 scope.go:117] "RemoveContainer" containerID="9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623" Dec 10 15:50:38 crc kubenswrapper[5114]: E1210 15:50:38.698662 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623\": container with ID starting with 
9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623 not found: ID does not exist" containerID="9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623" Dec 10 15:50:38 crc kubenswrapper[5114]: I1210 15:50:38.698713 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623"} err="failed to get container status \"9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623\": rpc error: code = NotFound desc = could not find container \"9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623\": container with ID starting with 9c0efc558a013517a41146dfdc36c099a8758437d041575f5b97acda770e3623 not found: ID does not exist" Dec 10 15:50:40 crc kubenswrapper[5114]: I1210 15:50:40.577134 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 10 15:50:51 crc kubenswrapper[5114]: I1210 15:50:51.474242 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 10 15:50:51 crc kubenswrapper[5114]: I1210 15:50:51.877671 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:50:51 crc kubenswrapper[5114]: I1210 15:50:51.877797 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:50:53 crc kubenswrapper[5114]: I1210 15:50:53.620401 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 10 15:50:54 crc kubenswrapper[5114]: I1210 15:50:54.708831 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 10 15:50:54 crc kubenswrapper[5114]: I1210 15:50:54.760064 5114 generic.go:358] "Generic (PLEG): container finished" podID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerID="d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6" exitCode=0 Dec 10 15:50:54 crc kubenswrapper[5114]: I1210 15:50:54.760167 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerDied","Data":"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6"} Dec 10 15:50:54 crc kubenswrapper[5114]: I1210 15:50:54.760849 5114 scope.go:117] "RemoveContainer" containerID="d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6" Dec 10 15:50:55 crc kubenswrapper[5114]: I1210 15:50:55.767487 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerStarted","Data":"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274"} Dec 10 15:50:55 crc kubenswrapper[5114]: I1210 15:50:55.768257 5114 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:50:55 crc kubenswrapper[5114]: I1210 15:50:55.769551 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:50:58 crc kubenswrapper[5114]: I1210 15:50:58.784185 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:50:58 crc kubenswrapper[5114]: I1210 15:50:58.786420 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 10 15:50:58 crc kubenswrapper[5114]: I1210 15:50:58.786462 5114 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="7cbaf697958fef621c4dcd039fc4c04614d3d35d637358fa8d48a5191ad44814" exitCode=137 Dec 10 15:50:58 crc kubenswrapper[5114]: I1210 15:50:58.786530 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"7cbaf697958fef621c4dcd039fc4c04614d3d35d637358fa8d48a5191ad44814"} Dec 10 15:50:58 crc kubenswrapper[5114]: I1210 15:50:58.786629 5114 scope.go:117] "RemoveContainer" containerID="9ec7a41d072aa02f59def36f4c2802872ef70cbd48046c3e3d6f6ccd6b254c53" Dec 10 15:50:59 crc kubenswrapper[5114]: I1210 15:50:59.016687 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:50:59 crc kubenswrapper[5114]: I1210 15:50:59.793946 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:50:59 crc kubenswrapper[5114]: I1210 15:50:59.795399 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6165653cd67617aa3fcfe7f6fc748c02e7703b035d4eb3091dfa87c199eccd7e"} Dec 10 15:51:02 crc kubenswrapper[5114]: I1210 15:51:02.704408 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 10 15:51:05 crc kubenswrapper[5114]: I1210 15:51:05.685564 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:51:06 crc kubenswrapper[5114]: I1210 15:51:06.082546 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 10 15:51:07 crc kubenswrapper[5114]: I1210 15:51:07.025769 5114 ???:1] "http: TLS handshake error from 192.168.126.11:37444: no serving certificate available for the kubelet" Dec 10 15:51:08 crc kubenswrapper[5114]: I1210 15:51:08.346300 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 10 15:51:08 crc kubenswrapper[5114]: I1210 15:51:08.643015 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:51:08 crc kubenswrapper[5114]: I1210 15:51:08.648230 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:51:08 crc kubenswrapper[5114]: I1210 15:51:08.654852 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 10 15:51:11 crc kubenswrapper[5114]: I1210 15:51:11.006834 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:51:14 crc kubenswrapper[5114]: I1210 15:51:14.763933 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:51:14 crc kubenswrapper[5114]: I1210 15:51:14.773240 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:51:15 crc kubenswrapper[5114]: I1210 15:51:15.979789 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.535261 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.535662 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" podUID="a9f54733-5e32-42a4-9b3c-5545471995a4" containerName="route-controller-manager" containerID="cri-o://8c56294a15dc2a6312e0fe763aeb6ce8c01bb15624905311a8f3082ebbd30009" gracePeriod=30 Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.544216 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.545207 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" podUID="8d238433-d5ee-408e-82a2-79db77556083" containerName="controller-manager" containerID="cri-o://3e26606249e443f3b9c301d5e313e07afa5c59a76fbdb738ff033fd54687c0e1" gracePeriod=30 Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.885622 5114 generic.go:358] "Generic (PLEG): container finished" podID="a9f54733-5e32-42a4-9b3c-5545471995a4" containerID="8c56294a15dc2a6312e0fe763aeb6ce8c01bb15624905311a8f3082ebbd30009" exitCode=0 Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.885842 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" event={"ID":"a9f54733-5e32-42a4-9b3c-5545471995a4","Type":"ContainerDied","Data":"8c56294a15dc2a6312e0fe763aeb6ce8c01bb15624905311a8f3082ebbd30009"} Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.890996 5114 generic.go:358] "Generic (PLEG): container finished" podID="8d238433-d5ee-408e-82a2-79db77556083" containerID="3e26606249e443f3b9c301d5e313e07afa5c59a76fbdb738ff033fd54687c0e1" exitCode=0 Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.891044 5114 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" event={"ID":"8d238433-d5ee-408e-82a2-79db77556083","Type":"ContainerDied","Data":"3e26606249e443f3b9c301d5e313e07afa5c59a76fbdb738ff033fd54687c0e1"} Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.978438 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:51:16 crc kubenswrapper[5114]: I1210 15:51:16.984261 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018128 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018859 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9f54733-5e32-42a4-9b3c-5545471995a4" containerName="route-controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018882 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f54733-5e32-42a4-9b3c-5545471995a4" containerName="route-controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018919 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" containerName="installer" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018927 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" containerName="installer" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018935 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d238433-d5ee-408e-82a2-79db77556083" containerName="controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018943 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d238433-d5ee-408e-82a2-79db77556083" containerName="controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018955 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.018964 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.019067 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.019079 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9f54733-5e32-42a4-9b3c-5545471995a4" containerName="route-controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.019092 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d48d128-3260-43c9-ab7a-d41717d59b73" containerName="installer" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.019103 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d238433-d5ee-408e-82a2-79db77556083" containerName="controller-manager" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.023282 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.025675 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.042985 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.054006 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.054353 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.067809 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca\") pod \"a9f54733-5e32-42a4-9b3c-5545471995a4\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.067859 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.067929 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config\") pod \"a9f54733-5e32-42a4-9b3c-5545471995a4\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.067955 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068021 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnwzm\" (UniqueName: \"kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068071 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert\") pod \"a9f54733-5e32-42a4-9b3c-5545471995a4\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068136 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068163 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp\") pod \"a9f54733-5e32-42a4-9b3c-5545471995a4\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068203 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068243 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z74db\" (UniqueName: \"kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db\") pod \"a9f54733-5e32-42a4-9b3c-5545471995a4\" (UID: \"a9f54733-5e32-42a4-9b3c-5545471995a4\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.068289 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert\") pod \"8d238433-d5ee-408e-82a2-79db77556083\" (UID: \"8d238433-d5ee-408e-82a2-79db77556083\") " Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.071341 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp" (OuterVolumeSpecName: "tmp") pod "a9f54733-5e32-42a4-9b3c-5545471995a4" (UID: "a9f54733-5e32-42a4-9b3c-5545471995a4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.071766 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp" (OuterVolumeSpecName: "tmp") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.071820 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.071854 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca" (OuterVolumeSpecName: "client-ca") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.072567 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9f54733-5e32-42a4-9b3c-5545471995a4" (UID: "a9f54733-5e32-42a4-9b3c-5545471995a4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.072537 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config" (OuterVolumeSpecName: "config") pod "a9f54733-5e32-42a4-9b3c-5545471995a4" (UID: "a9f54733-5e32-42a4-9b3c-5545471995a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.072654 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config" (OuterVolumeSpecName: "config") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.078032 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db" (OuterVolumeSpecName: "kube-api-access-z74db") pod "a9f54733-5e32-42a4-9b3c-5545471995a4" (UID: "a9f54733-5e32-42a4-9b3c-5545471995a4"). InnerVolumeSpecName "kube-api-access-z74db". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.078045 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9f54733-5e32-42a4-9b3c-5545471995a4" (UID: "a9f54733-5e32-42a4-9b3c-5545471995a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.078045 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.078467 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm" (OuterVolumeSpecName: "kube-api-access-jnwzm") pod "8d238433-d5ee-408e-82a2-79db77556083" (UID: "8d238433-d5ee-408e-82a2-79db77556083"). InnerVolumeSpecName "kube-api-access-jnwzm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169493 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169566 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169611 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487fc\" (UniqueName: \"kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169628 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169651 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169696 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169715 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169765 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " 
pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169803 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169915 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.169959 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vltgn\" (UniqueName: \"kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170060 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170120 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170134 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f54733-5e32-42a4-9b3c-5545471995a4-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170145 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170161 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jnwzm\" (UniqueName: \"kubernetes.io/projected/8d238433-d5ee-408e-82a2-79db77556083-kube-api-access-jnwzm\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170175 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9f54733-5e32-42a4-9b3c-5545471995a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170185 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d238433-d5ee-408e-82a2-79db77556083-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170197 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9f54733-5e32-42a4-9b3c-5545471995a4-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170207 5114 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d238433-d5ee-408e-82a2-79db77556083-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170218 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z74db\" (UniqueName: \"kubernetes.io/projected/a9f54733-5e32-42a4-9b3c-5545471995a4-kube-api-access-z74db\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.170229 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d238433-d5ee-408e-82a2-79db77556083-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271393 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271468 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271494 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271513 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vltgn\" (UniqueName: \"kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271546 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271576 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271622 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-487fc\" (UniqueName: \"kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: 
\"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271637 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271660 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271706 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.271723 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.273244 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.273314 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.273323 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.273428 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.273934 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.274409 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.274948 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.291367 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-487fc\" (UniqueName: \"kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.294917 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vltgn\" (UniqueName: \"kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.311682 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert\") pod \"controller-manager-55f46964d4-qtf89\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.311694 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert\") pod \"route-controller-manager-7948ccff46-m5976\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.347818 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.398715 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.637573 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:17 crc kubenswrapper[5114]: W1210 15:51:17.641541 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507e254e_a50b_439b_a82e_533352a37cf0.slice/crio-a222d3460824cd6e2de2b6b500aae3a0e3e696201c16fd610ac8e64e00bb151c WatchSource:0}: Error finding container a222d3460824cd6e2de2b6b500aae3a0e3e696201c16fd610ac8e64e00bb151c: Status 404 returned error can't find the container with id a222d3460824cd6e2de2b6b500aae3a0e3e696201c16fd610ac8e64e00bb151c Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.644137 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.756315 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:17 crc kubenswrapper[5114]: W1210 15:51:17.763333 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b2da7b_147d_448f_b235_9120d377e780.slice/crio-566c06dc616318da857c9822f95dedba450898a165e216d073aea628916d40db WatchSource:0}: Error finding container 566c06dc616318da857c9822f95dedba450898a165e216d073aea628916d40db: Status 404 returned error can't find the container with id 566c06dc616318da857c9822f95dedba450898a165e216d073aea628916d40db Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.897490 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.897506 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd" event={"ID":"a9f54733-5e32-42a4-9b3c-5545471995a4","Type":"ContainerDied","Data":"3d685e0d6ecd26fc79d758e6b1dcf84b7ac3a1ac5e12184a984dc49ab1e0c8fa"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.897824 5114 scope.go:117] "RemoveContainer" containerID="8c56294a15dc2a6312e0fe763aeb6ce8c01bb15624905311a8f3082ebbd30009" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.899031 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" event={"ID":"507e254e-a50b-439b-a82e-533352a37cf0","Type":"ContainerStarted","Data":"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.899060 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" event={"ID":"507e254e-a50b-439b-a82e-533352a37cf0","Type":"ContainerStarted","Data":"a222d3460824cd6e2de2b6b500aae3a0e3e696201c16fd610ac8e64e00bb151c"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.899238 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.903037 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" event={"ID":"23b2da7b-147d-448f-b235-9120d377e780","Type":"ContainerStarted","Data":"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.903072 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" event={"ID":"23b2da7b-147d-448f-b235-9120d377e780","Type":"ContainerStarted","Data":"566c06dc616318da857c9822f95dedba450898a165e216d073aea628916d40db"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.903235 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.904486 5114 patch_prober.go:28] interesting pod/controller-manager-55f46964d4-qtf89 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.904531 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" podUID="23b2da7b-147d-448f-b235-9120d377e780" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.905889 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" 
event={"ID":"8d238433-d5ee-408e-82a2-79db77556083","Type":"ContainerDied","Data":"f9cec6be2280639a5a788e6c822385396333c70a4224a3fd4cf8bd983549a2fe"} Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.906038 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.913077 5114 scope.go:117] "RemoveContainer" containerID="3e26606249e443f3b9c301d5e313e07afa5c59a76fbdb738ff033fd54687c0e1" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.917048 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" podStartSLOduration=1.917032984 podStartE2EDuration="1.917032984s" podCreationTimestamp="2025-12-10 15:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:51:17.916543582 +0000 UTC m=+303.637344769" watchObservedRunningTime="2025-12-10 15:51:17.917032984 +0000 UTC m=+303.637834161" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.933784 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" podStartSLOduration=1.933770477 podStartE2EDuration="1.933770477s" podCreationTimestamp="2025-12-10 15:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:51:17.932809513 +0000 UTC m=+303.653610710" watchObservedRunningTime="2025-12-10 15:51:17.933770477 +0000 UTC m=+303.654571644" Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.949646 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.954153 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f8dcf6c95-hrkgs"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.958217 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:51:17 crc kubenswrapper[5114]: I1210 15:51:17.961716 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcdb7fc5b-2thsd"] Dec 10 15:51:18 crc kubenswrapper[5114]: I1210 15:51:18.478158 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:18 crc kubenswrapper[5114]: I1210 15:51:18.575903 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d238433-d5ee-408e-82a2-79db77556083" path="/var/lib/kubelet/pods/8d238433-d5ee-408e-82a2-79db77556083/volumes" Dec 10 15:51:18 crc kubenswrapper[5114]: I1210 15:51:18.576692 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f54733-5e32-42a4-9b3c-5545471995a4" path="/var/lib/kubelet/pods/a9f54733-5e32-42a4-9b3c-5545471995a4/volumes" Dec 10 15:51:18 crc kubenswrapper[5114]: I1210 15:51:18.845902 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:51:18 crc kubenswrapper[5114]: I1210 15:51:18.921527 5114 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:21 crc kubenswrapper[5114]: I1210 15:51:21.876213 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:51:21 crc kubenswrapper[5114]: I1210 15:51:21.876601 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:51:21 crc kubenswrapper[5114]: I1210 15:51:21.876652 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:51:21 crc kubenswrapper[5114]: I1210 15:51:21.877188 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08"} pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:51:21 crc kubenswrapper[5114]: I1210 15:51:21.877246 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" containerID="cri-o://95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08" gracePeriod=600 Dec 10 15:51:22 crc kubenswrapper[5114]: I1210 15:51:22.935242 5114 generic.go:358] "Generic (PLEG): container finished" podID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerID="95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08" exitCode=0 Dec 10 15:51:22 crc kubenswrapper[5114]: I1210 15:51:22.935364 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerDied","Data":"95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08"} Dec 10 15:51:22 crc kubenswrapper[5114]: I1210 15:51:22.935847 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8"} Dec 10 15:51:26 crc kubenswrapper[5114]: I1210 15:51:26.764548 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:26 crc kubenswrapper[5114]: I1210 15:51:26.765079 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" podUID="507e254e-a50b-439b-a82e-533352a37cf0" containerName="route-controller-manager" containerID="cri-o://506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8" gracePeriod=30 Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.331773 5114 dynamic_cafile_content.go:123] "Loaded a new CA 
Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.674932 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.713519 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb"] Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.714253 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="507e254e-a50b-439b-a82e-533352a37cf0" containerName="route-controller-manager" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.714267 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="507e254e-a50b-439b-a82e-533352a37cf0" containerName="route-controller-manager" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.714417 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="507e254e-a50b-439b-a82e-533352a37cf0" containerName="route-controller-manager" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.722691 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723418 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-487fc\" (UniqueName: \"kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc\") pod \"507e254e-a50b-439b-a82e-533352a37cf0\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723485 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert\") pod \"507e254e-a50b-439b-a82e-533352a37cf0\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723521 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config\") pod \"507e254e-a50b-439b-a82e-533352a37cf0\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723548 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp\") pod \"507e254e-a50b-439b-a82e-533352a37cf0\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723641 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca\") pod \"507e254e-a50b-439b-a82e-533352a37cf0\" (UID: \"507e254e-a50b-439b-a82e-533352a37cf0\") " Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723788 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-config\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 
crc kubenswrapper[5114]: I1210 15:51:27.723835 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-client-ca\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723879 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzlj\" (UniqueName: \"kubernetes.io/projected/4ecc0637-14ca-4454-9564-8b6c143a0397-kube-api-access-8jzlj\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723906 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ecc0637-14ca-4454-9564-8b6c143a0397-serving-cert\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.723945 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ecc0637-14ca-4454-9564-8b6c143a0397-tmp\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.727208 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp" (OuterVolumeSpecName: "tmp") pod "507e254e-a50b-439b-a82e-533352a37cf0" (UID: "507e254e-a50b-439b-a82e-533352a37cf0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.727821 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config" (OuterVolumeSpecName: "config") pod "507e254e-a50b-439b-a82e-533352a37cf0" (UID: "507e254e-a50b-439b-a82e-533352a37cf0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.728253 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca" (OuterVolumeSpecName: "client-ca") pod "507e254e-a50b-439b-a82e-533352a37cf0" (UID: "507e254e-a50b-439b-a82e-533352a37cf0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.728887 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb"] Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.733834 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc" (OuterVolumeSpecName: "kube-api-access-487fc") pod "507e254e-a50b-439b-a82e-533352a37cf0" (UID: "507e254e-a50b-439b-a82e-533352a37cf0"). InnerVolumeSpecName "kube-api-access-487fc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.734475 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "507e254e-a50b-439b-a82e-533352a37cf0" (UID: "507e254e-a50b-439b-a82e-533352a37cf0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.824694 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-client-ca\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.824785 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzlj\" (UniqueName: \"kubernetes.io/projected/4ecc0637-14ca-4454-9564-8b6c143a0397-kube-api-access-8jzlj\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.824816 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ecc0637-14ca-4454-9564-8b6c143a0397-serving-cert\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.824854 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ecc0637-14ca-4454-9564-8b6c143a0397-tmp\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825077 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-config\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825227 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507e254e-a50b-439b-a82e-533352a37cf0-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825241 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825250 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/507e254e-a50b-439b-a82e-533352a37cf0-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825261 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507e254e-a50b-439b-a82e-533352a37cf0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825286 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-487fc\" (UniqueName: \"kubernetes.io/projected/507e254e-a50b-439b-a82e-533352a37cf0-kube-api-access-487fc\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825482 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ecc0637-14ca-4454-9564-8b6c143a0397-tmp\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.825862 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-client-ca\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.826360 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ecc0637-14ca-4454-9564-8b6c143a0397-config\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.830090 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ecc0637-14ca-4454-9564-8b6c143a0397-serving-cert\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.843367 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzlj\" (UniqueName: \"kubernetes.io/projected/4ecc0637-14ca-4454-9564-8b6c143a0397-kube-api-access-8jzlj\") pod \"route-controller-manager-55f5787c6b-9h7zb\" (UID: \"4ecc0637-14ca-4454-9564-8b6c143a0397\") " pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.962131 5114 generic.go:358] "Generic (PLEG): container finished" podID="507e254e-a50b-439b-a82e-533352a37cf0" containerID="506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8" exitCode=0 Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.962187 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" event={"ID":"507e254e-a50b-439b-a82e-533352a37cf0","Type":"ContainerDied","Data":"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8"} Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.962213 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" event={"ID":"507e254e-a50b-439b-a82e-533352a37cf0","Type":"ContainerDied","Data":"a222d3460824cd6e2de2b6b500aae3a0e3e696201c16fd610ac8e64e00bb151c"} Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.962219 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.962233 5114 scope.go:117] "RemoveContainer" containerID="506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.982996 5114 scope.go:117] "RemoveContainer" containerID="506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8" Dec 10 15:51:27 crc kubenswrapper[5114]: E1210 15:51:27.983526 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8\": container with ID starting with 506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8 not found: ID does not exist" containerID="506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.983558 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8"} err="failed to get container status \"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8\": rpc error: code = NotFound desc = could not find container \"506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8\": container with ID starting with 506ee645d535ef62301bdf32a89521518104b57eb990a683d0b6e1e95ba384e8 not found: ID does not exist" Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.994343 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:27 crc kubenswrapper[5114]: I1210 15:51:27.997828 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7948ccff46-m5976"] Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.085486 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.476071 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb"] Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.580686 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507e254e-a50b-439b-a82e-533352a37cf0" path="/var/lib/kubelet/pods/507e254e-a50b-439b-a82e-533352a37cf0/volumes" Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.968953 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" event={"ID":"4ecc0637-14ca-4454-9564-8b6c143a0397","Type":"ContainerStarted","Data":"416f586bf950755b2b0ee7feb0c8c5b2c05b43a5627bfbebbd994abbc8e10328"} Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.969338 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.969355 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" event={"ID":"4ecc0637-14ca-4454-9564-8b6c143a0397","Type":"ContainerStarted","Data":"8fcf7640f1f283eda6201244eadcdb22702a95f2df5181dbf4f342790a3a3c09"} Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.975355 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" Dec 10 15:51:28 crc kubenswrapper[5114]: I1210 15:51:28.993026 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55f5787c6b-9h7zb" podStartSLOduration=2.993006292 podStartE2EDuration="2.993006292s" podCreationTimestamp="2025-12-10 15:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:51:28.988441757 +0000 UTC m=+314.709242954" watchObservedRunningTime="2025-12-10 15:51:28.993006292 +0000 UTC m=+314.713807469" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.176864 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.177620 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" podUID="23b2da7b-147d-448f-b235-9120d377e780" containerName="controller-manager" containerID="cri-o://09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179" gracePeriod=30 Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.775325 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799157 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f6665dd78-pqlm4"] Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799250 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vltgn\" (UniqueName: \"kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799318 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799386 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799437 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799459 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799505 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles\") pod \"23b2da7b-147d-448f-b235-9120d377e780\" (UID: \"23b2da7b-147d-448f-b235-9120d377e780\") " Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799681 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23b2da7b-147d-448f-b235-9120d377e780" containerName="controller-manager" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799697 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b2da7b-147d-448f-b235-9120d377e780" containerName="controller-manager" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.799781 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="23b2da7b-147d-448f-b235-9120d377e780" containerName="controller-manager" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.800429 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.800626 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca" (OuterVolumeSpecName: "client-ca") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.800818 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp" (OuterVolumeSpecName: "tmp") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.800848 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config" (OuterVolumeSpecName: "config") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.810733 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.810769 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn" (OuterVolumeSpecName: "kube-api-access-vltgn") pod "23b2da7b-147d-448f-b235-9120d377e780" (UID: "23b2da7b-147d-448f-b235-9120d377e780"). InnerVolumeSpecName "kube-api-access-vltgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.811567 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.815207 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6665dd78-pqlm4"] Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900166 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-proxy-ca-bundles\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900204 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d12908-dd4e-4fe0-be69-f7377f024168-serving-cert\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900262 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-config\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900371 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dqr\" (UniqueName: \"kubernetes.io/projected/95d12908-dd4e-4fe0-be69-f7377f024168-kube-api-access-z8dqr\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900449 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95d12908-dd4e-4fe0-be69-f7377f024168-tmp\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900536 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-client-ca\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900734 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900759 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vltgn\" (UniqueName: \"kubernetes.io/projected/23b2da7b-147d-448f-b235-9120d377e780-kube-api-access-vltgn\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900769 5114 reconciler_common.go:299] "Volume 
detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23b2da7b-147d-448f-b235-9120d377e780-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900779 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900787 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b2da7b-147d-448f-b235-9120d377e780-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:56 crc kubenswrapper[5114]: I1210 15:51:56.900796 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2da7b-147d-448f-b235-9120d377e780-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.001910 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-config\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.001973 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dqr\" (UniqueName: \"kubernetes.io/projected/95d12908-dd4e-4fe0-be69-f7377f024168-kube-api-access-z8dqr\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.002422 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95d12908-dd4e-4fe0-be69-f7377f024168-tmp\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.002485 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-client-ca\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.002629 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-proxy-ca-bundles\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.002739 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95d12908-dd4e-4fe0-be69-f7377f024168-tmp\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.003225 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-config\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.003424 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-client-ca\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.003495 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d12908-dd4e-4fe0-be69-f7377f024168-serving-cert\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.003783 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95d12908-dd4e-4fe0-be69-f7377f024168-proxy-ca-bundles\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.007340 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d12908-dd4e-4fe0-be69-f7377f024168-serving-cert\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.021593 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dqr\" (UniqueName: \"kubernetes.io/projected/95d12908-dd4e-4fe0-be69-f7377f024168-kube-api-access-z8dqr\") pod \"controller-manager-7f6665dd78-pqlm4\" (UID: \"95d12908-dd4e-4fe0-be69-f7377f024168\") " pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.150311 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.156640 5114 generic.go:358] "Generic (PLEG): container finished" podID="23b2da7b-147d-448f-b235-9120d377e780" containerID="09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179" exitCode=0 Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.156765 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.156804 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" event={"ID":"23b2da7b-147d-448f-b235-9120d377e780","Type":"ContainerDied","Data":"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179"} Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.156890 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f46964d4-qtf89" event={"ID":"23b2da7b-147d-448f-b235-9120d377e780","Type":"ContainerDied","Data":"566c06dc616318da857c9822f95dedba450898a165e216d073aea628916d40db"} Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.156918 5114 scope.go:117] "RemoveContainer" containerID="09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.174559 5114 scope.go:117] "RemoveContainer" containerID="09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179" Dec 10 15:51:57 crc kubenswrapper[5114]: E1210 15:51:57.174935 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179\": container with ID starting with 09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179 not found: ID does not exist" containerID="09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.174962 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179"} err="failed to get container status \"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179\": rpc error: code = NotFound desc = could not find container \"09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179\": container with ID starting with 09e5c75ad192a96aa521ba529d1d4e44c3b80c7c09e19de811f14e3046a4d179 not found: ID does not exist" Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.200237 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.207896 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55f46964d4-qtf89"] Dec 10 15:51:57 crc kubenswrapper[5114]: I1210 15:51:57.532615 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6665dd78-pqlm4"] Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.165200 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" event={"ID":"95d12908-dd4e-4fe0-be69-f7377f024168","Type":"ContainerStarted","Data":"cfea0ab94799b184e468e2efc22e69bf43ae594a33251c39ff2668e9bd3363bc"} Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.165241 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" event={"ID":"95d12908-dd4e-4fe0-be69-f7377f024168","Type":"ContainerStarted","Data":"3077a944ed03940df8967f834c341d06a8230e207894fc89cc468685076df3da"} Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.165436 5114 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.183706 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" podStartSLOduration=2.183683315 podStartE2EDuration="2.183683315s" podCreationTimestamp="2025-12-10 15:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:51:58.181947422 +0000 UTC m=+343.902748639" watchObservedRunningTime="2025-12-10 15:51:58.183683315 +0000 UTC m=+343.904484502" Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.214791 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f6665dd78-pqlm4" Dec 10 15:51:58 crc kubenswrapper[5114]: I1210 15:51:58.580757 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b2da7b-147d-448f-b235-9120d377e780" path="/var/lib/kubelet/pods/23b2da7b-147d-448f-b235-9120d377e780/volumes" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.152665 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.154764 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lfhws" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="registry-server" containerID="cri-o://0bc2e3806d3c801e7d69d340c041bbf37740b51e4ced20cd717e57cb7582f157" gracePeriod=30 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.161228 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.161812 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dvt8r" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="registry-server" containerID="cri-o://7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" gracePeriod=30 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.172841 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.173200 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" containerID="cri-o://f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274" gracePeriod=30 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.189079 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.189475 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tkn7z" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="registry-server" containerID="cri-o://9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f" gracePeriod=30 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.195492 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qmf72"] Dec 
10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.203132 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.203480 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2zlq" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="registry-server" containerID="cri-o://e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012" gracePeriod=30 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.203886 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.205700 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qmf72"] Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.293123 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.293159 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.293221 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6eacf713-415c-47f3-a958-d4325be8747d-tmp\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.293242 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lsv7\" (UniqueName: \"kubernetes.io/projected/6eacf713-415c-47f3-a958-d4325be8747d-kube-api-access-8lsv7\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.305513 5114 generic.go:358] "Generic (PLEG): container finished" podID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerID="0bc2e3806d3c801e7d69d340c041bbf37740b51e4ced20cd717e57cb7582f157" exitCode=0 Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.305671 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerDied","Data":"0bc2e3806d3c801e7d69d340c041bbf37740b51e4ced20cd717e57cb7582f157"} Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.313595 5114 generic.go:358] "Generic (PLEG): container finished" podID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerID="7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" exitCode=0 Dec 10 
15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.313735 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerDied","Data":"7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f"} Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.394552 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6eacf713-415c-47f3-a958-d4325be8747d-tmp\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.394898 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsv7\" (UniqueName: \"kubernetes.io/projected/6eacf713-415c-47f3-a958-d4325be8747d-kube-api-access-8lsv7\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.394951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.394972 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.402796 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6eacf713-415c-47f3-a958-d4325be8747d-tmp\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.403385 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.404037 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eacf713-415c-47f3-a958-d4325be8747d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.414172 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsv7\" (UniqueName: \"kubernetes.io/projected/6eacf713-415c-47f3-a958-d4325be8747d-kube-api-access-8lsv7\") pod 
\"marketplace-operator-547dbd544d-qmf72\" (UID: \"6eacf713-415c-47f3-a958-d4325be8747d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.537974 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.613640 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:52:24 crc kubenswrapper[5114]: E1210 15:52:24.678706 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f is running failed: container process not found" containerID="7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 15:52:24 crc kubenswrapper[5114]: E1210 15:52:24.678964 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f is running failed: container process not found" containerID="7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 15:52:24 crc kubenswrapper[5114]: E1210 15:52:24.680253 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f is running failed: container process not found" containerID="7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 15:52:24 crc kubenswrapper[5114]: E1210 15:52:24.680306 5114 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-dvt8r" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="registry-server" probeResult="unknown" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.764408 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.818812 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities\") pod \"bc6eba38-9248-4153-acdb-87d7acc29df0\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.818881 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content\") pod \"bc6eba38-9248-4153-acdb-87d7acc29df0\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.818979 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2dk\" (UniqueName: \"kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk\") pod \"bc6eba38-9248-4153-acdb-87d7acc29df0\" (UID: \"bc6eba38-9248-4153-acdb-87d7acc29df0\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.820878 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities" (OuterVolumeSpecName: "utilities") pod "bc6eba38-9248-4153-acdb-87d7acc29df0" (UID: "bc6eba38-9248-4153-acdb-87d7acc29df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.834574 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk" (OuterVolumeSpecName: "kube-api-access-nt2dk") pod "bc6eba38-9248-4153-acdb-87d7acc29df0" (UID: "bc6eba38-9248-4153-acdb-87d7acc29df0"). InnerVolumeSpecName "kube-api-access-nt2dk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.839719 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.843042 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.846681 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.866173 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc6eba38-9248-4153-acdb-87d7acc29df0" (UID: "bc6eba38-9248-4153-acdb-87d7acc29df0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.920964 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content\") pod \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921110 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities\") pod \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921178 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities\") pod \"270b074f-91f5-4ea6-b465-b0cc4a81f016\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921226 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkdtt\" (UniqueName: \"kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt\") pod \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\" (UID: \"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921263 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlzmw\" (UniqueName: \"kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw\") pod \"270b074f-91f5-4ea6-b465-b0cc4a81f016\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921337 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnsxn\" (UniqueName: \"kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn\") pod \"3c04642b-9dc3-4509-a6d8-b03df365d743\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921380 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content\") pod \"270b074f-91f5-4ea6-b465-b0cc4a81f016\" (UID: \"270b074f-91f5-4ea6-b465-b0cc4a81f016\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921414 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp\") pod \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921470 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") pod \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921486 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") pod 
\"1cce5f28-0219-4980-b7bd-26cbfcbe6435\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921533 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content\") pod \"3c04642b-9dc3-4509-a6d8-b03df365d743\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921551 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities\") pod \"3c04642b-9dc3-4509-a6d8-b03df365d743\" (UID: \"3c04642b-9dc3-4509-a6d8-b03df365d743\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921572 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gws25\" (UniqueName: \"kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25\") pod \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\" (UID: \"1cce5f28-0219-4980-b7bd-26cbfcbe6435\") " Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921845 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921858 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6eba38-9248-4153-acdb-87d7acc29df0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.921869 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nt2dk\" (UniqueName: \"kubernetes.io/projected/bc6eba38-9248-4153-acdb-87d7acc29df0-kube-api-access-nt2dk\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.922246 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities" (OuterVolumeSpecName: "utilities") pod "270b074f-91f5-4ea6-b465-b0cc4a81f016" (UID: "270b074f-91f5-4ea6-b465-b0cc4a81f016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.922643 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp" (OuterVolumeSpecName: "tmp") pod "1cce5f28-0219-4980-b7bd-26cbfcbe6435" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.922804 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1cce5f28-0219-4980-b7bd-26cbfcbe6435" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.923682 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities" (OuterVolumeSpecName: "utilities") pod "3c04642b-9dc3-4509-a6d8-b03df365d743" (UID: "3c04642b-9dc3-4509-a6d8-b03df365d743"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.924260 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities" (OuterVolumeSpecName: "utilities") pod "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" (UID: "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.925847 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn" (OuterVolumeSpecName: "kube-api-access-wnsxn") pod "3c04642b-9dc3-4509-a6d8-b03df365d743" (UID: "3c04642b-9dc3-4509-a6d8-b03df365d743"). InnerVolumeSpecName "kube-api-access-wnsxn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.925968 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw" (OuterVolumeSpecName: "kube-api-access-jlzmw") pod "270b074f-91f5-4ea6-b465-b0cc4a81f016" (UID: "270b074f-91f5-4ea6-b465-b0cc4a81f016"). InnerVolumeSpecName "kube-api-access-jlzmw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.928981 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25" (OuterVolumeSpecName: "kube-api-access-gws25") pod "1cce5f28-0219-4980-b7bd-26cbfcbe6435" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435"). InnerVolumeSpecName "kube-api-access-gws25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.929247 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1cce5f28-0219-4980-b7bd-26cbfcbe6435" (UID: "1cce5f28-0219-4980-b7bd-26cbfcbe6435"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.937467 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt" (OuterVolumeSpecName: "kube-api-access-zkdtt") pod "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" (UID: "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e"). InnerVolumeSpecName "kube-api-access-zkdtt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.939377 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "270b074f-91f5-4ea6-b465-b0cc4a81f016" (UID: "270b074f-91f5-4ea6-b465-b0cc4a81f016"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:24 crc kubenswrapper[5114]: I1210 15:52:24.991507 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" (UID: "44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.006912 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qmf72"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022631 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022659 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022670 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkdtt\" (UniqueName: \"kubernetes.io/projected/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-kube-api-access-zkdtt\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022679 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jlzmw\" (UniqueName: \"kubernetes.io/projected/270b074f-91f5-4ea6-b465-b0cc4a81f016-kube-api-access-jlzmw\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022686 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wnsxn\" (UniqueName: \"kubernetes.io/projected/3c04642b-9dc3-4509-a6d8-b03df365d743-kube-api-access-wnsxn\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022694 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/270b074f-91f5-4ea6-b465-b0cc4a81f016-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022703 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cce5f28-0219-4980-b7bd-26cbfcbe6435-tmp\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022714 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022722 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cce5f28-0219-4980-b7bd-26cbfcbe6435-marketplace-trusted-ca\") 
on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022731 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022739 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gws25\" (UniqueName: \"kubernetes.io/projected/1cce5f28-0219-4980-b7bd-26cbfcbe6435-kube-api-access-gws25\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022748 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.022997 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c04642b-9dc3-4509-a6d8-b03df365d743" (UID: "3c04642b-9dc3-4509-a6d8-b03df365d743"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.124543 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c04642b-9dc3-4509-a6d8-b03df365d743-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.323286 5114 generic.go:358] "Generic (PLEG): container finished" podID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerID="e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012" exitCode=0 Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.323414 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerDied","Data":"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.323453 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2zlq" event={"ID":"3c04642b-9dc3-4509-a6d8-b03df365d743","Type":"ContainerDied","Data":"c8cd0d95fdff99ee81a255c083c465f429e6c06c70da2d1b0bf9fcb16d67944e"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.323473 5114 scope.go:117] "RemoveContainer" containerID="e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.323711 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2zlq" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.337406 5114 generic.go:358] "Generic (PLEG): container finished" podID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerID="9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f" exitCode=0 Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.337477 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerDied","Data":"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.337505 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkn7z" event={"ID":"270b074f-91f5-4ea6-b465-b0cc4a81f016","Type":"ContainerDied","Data":"1760c5e1fc86138734cdd0f8e12dd02fd244e312b26e79b7553f23e6d27d4d26"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.337590 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkn7z" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.352431 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvt8r" event={"ID":"44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e","Type":"ContainerDied","Data":"88ee232a4e2c6caf16fb1a2ecfd2b8b06a22f0cf753d5ba9e45cf1b57461d0a7"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.352444 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvt8r" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.354123 5114 generic.go:358] "Generic (PLEG): container finished" podID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerID="f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274" exitCode=0 Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.354204 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerDied","Data":"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.354224 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" event={"ID":"1cce5f28-0219-4980-b7bd-26cbfcbe6435","Type":"ContainerDied","Data":"229b4a1ee8d7dec1bbf9aece8b2b7f657274cc13ae39af55fff89463cce2d549"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.354298 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-wpjqd" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.361265 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" event={"ID":"6eacf713-415c-47f3-a958-d4325be8747d","Type":"ContainerStarted","Data":"b6b2ba9264ad313b77adeece3c99566ed0594d4fe7e1602a6478a793b373fd29"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.364946 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhws" event={"ID":"bc6eba38-9248-4153-acdb-87d7acc29df0","Type":"ContainerDied","Data":"a81b9af3f58df1d5eac84883fe4f247acdcb4959fab2b89e0bf720d5b42caf2d"} Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.365044 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhws" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.464847 5114 scope.go:117] "RemoveContainer" containerID="31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.521939 5114 scope.go:117] "RemoveContainer" containerID="342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.558732 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.561439 5114 scope.go:117] "RemoveContainer" containerID="e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.572890 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkn7z"] Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.575392 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012\": container with ID starting with e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012 not found: ID does not exist" containerID="e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.575462 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012"} err="failed to get container status \"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012\": rpc error: code = NotFound desc = could not find container \"e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012\": container with ID starting with e6657e38748580f439da23078e0cb9e5a2b1f5e5c156fdb520dc1c7cf3741012 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.575659 5114 scope.go:117] "RemoveContainer" containerID="31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.576206 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26\": container with ID starting with 31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26 not found: ID does not exist" containerID="31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26" Dec 10 15:52:25 crc 
kubenswrapper[5114]: I1210 15:52:25.576234 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26"} err="failed to get container status \"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26\": rpc error: code = NotFound desc = could not find container \"31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26\": container with ID starting with 31691d0757be053df02438ff181c46a731060f5c946e1d3dfc8c142d0b202e26 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.576249 5114 scope.go:117] "RemoveContainer" containerID="342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.576607 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563\": container with ID starting with 342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563 not found: ID does not exist" containerID="342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.576757 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563"} err="failed to get container status \"342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563\": rpc error: code = NotFound desc = could not find container \"342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563\": container with ID starting with 342b102ce277959b3fdbb2fa69e6c49a99fc26760492e91630ef0f636ac97563 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.576877 5114 scope.go:117] "RemoveContainer" containerID="9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.586866 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.597401 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-wpjqd"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.602983 5114 scope.go:117] "RemoveContainer" containerID="71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.608651 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.615974 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dvt8r"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.622546 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.624110 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lfhws"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.626883 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.627093 5114 scope.go:117] "RemoveContainer" 
containerID="cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.629606 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2zlq"] Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.648170 5114 scope.go:117] "RemoveContainer" containerID="9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.648967 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f\": container with ID starting with 9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f not found: ID does not exist" containerID="9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649059 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f"} err="failed to get container status \"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f\": rpc error: code = NotFound desc = could not find container \"9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f\": container with ID starting with 9d856bdfb3026bdfb1ec8a7131216cb65da0e74eae51043ba13fccd25687348f not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649096 5114 scope.go:117] "RemoveContainer" containerID="71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.649554 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994\": container with ID starting with 71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994 not found: ID does not exist" containerID="71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649587 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994"} err="failed to get container status \"71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994\": rpc error: code = NotFound desc = could not find container \"71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994\": container with ID starting with 71cea54446a64c5cd4374c6dabf19bcf52529461307655d9e6ebc99c49754994 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649613 5114 scope.go:117] "RemoveContainer" containerID="cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.649875 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f\": container with ID starting with cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f not found: ID does not exist" containerID="cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649900 5114 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f"} err="failed to get container status \"cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f\": rpc error: code = NotFound desc = could not find container \"cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f\": container with ID starting with cf79a6e133b6ee0cac2f597eebef4fd8d870abdcec2209f79b5867a88ddb3c3f not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.649914 5114 scope.go:117] "RemoveContainer" containerID="7ab6129eb467cac14838cf1b07a911fd7e1837ce57b1eae2f8c09e39f3c3132f" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.662747 5114 scope.go:117] "RemoveContainer" containerID="69bdf76ff651b4876de93bc9953cd33a6b1e092e9075806db91864059fbed73c" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.677600 5114 scope.go:117] "RemoveContainer" containerID="620a8eede76fa27029317a1c42e6ea8bc13d5b1dccd01add92058829bd04f03a" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.690178 5114 scope.go:117] "RemoveContainer" containerID="f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.702818 5114 scope.go:117] "RemoveContainer" containerID="d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.718425 5114 scope.go:117] "RemoveContainer" containerID="f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.720842 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274\": container with ID starting with f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274 not found: ID does not exist" containerID="f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.720885 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274"} err="failed to get container status \"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274\": rpc error: code = NotFound desc = could not find container \"f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274\": container with ID starting with f02be357609b629ed510c6d40545028d59a4926f9ea2fdf791a062cee4e5f274 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.720911 5114 scope.go:117] "RemoveContainer" containerID="d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6" Dec 10 15:52:25 crc kubenswrapper[5114]: E1210 15:52:25.721316 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6\": container with ID starting with d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6 not found: ID does not exist" containerID="d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.721386 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6"} err="failed to get container status 
\"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6\": rpc error: code = NotFound desc = could not find container \"d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6\": container with ID starting with d2658ab04cd150979e9d0d56fde13192d480bf4cd98fd857e4bd00bedb87a7b6 not found: ID does not exist" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.721462 5114 scope.go:117] "RemoveContainer" containerID="0bc2e3806d3c801e7d69d340c041bbf37740b51e4ced20cd717e57cb7582f157" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.735759 5114 scope.go:117] "RemoveContainer" containerID="b4df279fd2545c1c47a574318f3545cf9c9a36241cdedcd8be16545ed9e273ed" Dec 10 15:52:25 crc kubenswrapper[5114]: I1210 15:52:25.751363 5114 scope.go:117] "RemoveContainer" containerID="b666afe5c1f16390566efd7cf85aeefb2480355c505804c7413f545a7ef08455" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.360911 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-smk77"] Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361781 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361808 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361831 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361841 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361859 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361870 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361885 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361895 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361910 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361920 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361936 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361946 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 
15:52:26.361965 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361976 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361988 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.361999 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.362009 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.362020 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.362039 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.362050 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="extract-content" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.362070 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363353 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363377 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363387 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363407 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363414 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363428 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363436 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="extract-utilities" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363555 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363567 5114 
memory_manager.go:356] "RemoveStaleState removing state" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363580 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363590 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363602 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" containerName="registry-server" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.363946 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" containerName="marketplace-operator" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.368723 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.371566 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smk77"] Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.371761 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.379946 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" event={"ID":"6eacf713-415c-47f3-a958-d4325be8747d","Type":"ContainerStarted","Data":"a774ffc4a4fe6223adbc076b85a6fa01a84edc45766d66358858eb26a8f2a54b"} Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.380204 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.383567 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.409476 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-qmf72" podStartSLOduration=2.408903037 podStartE2EDuration="2.408903037s" podCreationTimestamp="2025-12-10 15:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:52:26.403350757 +0000 UTC m=+372.124151944" watchObservedRunningTime="2025-12-10 15:52:26.408903037 +0000 UTC m=+372.129704214" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.448709 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-catalog-content\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.448767 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-utilities\") pod 
\"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.448799 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n82ts\" (UniqueName: \"kubernetes.io/projected/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-kube-api-access-n82ts\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.551523 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-catalog-content\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.551938 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-utilities\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.552097 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n82ts\" (UniqueName: \"kubernetes.io/projected/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-kube-api-access-n82ts\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.552789 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-catalog-content\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.553008 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-utilities\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.559247 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mcgrk"] Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.566439 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.571639 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n82ts\" (UniqueName: \"kubernetes.io/projected/3326c30e-e68b-4b7d-975c-bf6bdb74b04b-kube-api-access-n82ts\") pod \"certified-operators-smk77\" (UID: \"3326c30e-e68b-4b7d-975c-bf6bdb74b04b\") " pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.573577 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.596090 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cce5f28-0219-4980-b7bd-26cbfcbe6435" path="/var/lib/kubelet/pods/1cce5f28-0219-4980-b7bd-26cbfcbe6435/volumes" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.596637 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270b074f-91f5-4ea6-b465-b0cc4a81f016" path="/var/lib/kubelet/pods/270b074f-91f5-4ea6-b465-b0cc4a81f016/volumes" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.597177 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c04642b-9dc3-4509-a6d8-b03df365d743" path="/var/lib/kubelet/pods/3c04642b-9dc3-4509-a6d8-b03df365d743/volumes" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.598150 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e" path="/var/lib/kubelet/pods/44e9a0a9-5c2a-43a1-8a30-e02dc9cef24e/volumes" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.598751 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6eba38-9248-4153-acdb-87d7acc29df0" path="/var/lib/kubelet/pods/bc6eba38-9248-4153-acdb-87d7acc29df0/volumes" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.599660 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcgrk"] Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.652878 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-catalog-content\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.653210 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2sf6\" (UniqueName: \"kubernetes.io/projected/f5b528d4-737f-4220-93c1-835d19f6c10d-kube-api-access-m2sf6\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.653392 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-utilities\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.694393 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.754521 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2sf6\" (UniqueName: \"kubernetes.io/projected/f5b528d4-737f-4220-93c1-835d19f6c10d-kube-api-access-m2sf6\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.754615 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-utilities\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.754680 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-catalog-content\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.755219 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-utilities\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.756388 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b528d4-737f-4220-93c1-835d19f6c10d-catalog-content\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.775649 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2sf6\" (UniqueName: \"kubernetes.io/projected/f5b528d4-737f-4220-93c1-835d19f6c10d-kube-api-access-m2sf6\") pod \"community-operators-mcgrk\" (UID: \"f5b528d4-737f-4220-93c1-835d19f6c10d\") " pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:26 crc kubenswrapper[5114]: I1210 15:52:26.923037 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.074850 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smk77"] Dec 10 15:52:27 crc kubenswrapper[5114]: W1210 15:52:27.081051 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3326c30e_e68b_4b7d_975c_bf6bdb74b04b.slice/crio-a56cd4af40d33da40082585f23b65a54aa1dcf545262c17ae1c408d8a3a7e1be WatchSource:0}: Error finding container a56cd4af40d33da40082585f23b65a54aa1dcf545262c17ae1c408d8a3a7e1be: Status 404 returned error can't find the container with id a56cd4af40d33da40082585f23b65a54aa1dcf545262c17ae1c408d8a3a7e1be Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.303015 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcgrk"] Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.391005 5114 generic.go:358] "Generic (PLEG): container finished" podID="3326c30e-e68b-4b7d-975c-bf6bdb74b04b" containerID="3998ccd7e266e6162deba27f009d537b0d886415c83eda1493b8085a2e214e16" exitCode=0 Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.391387 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smk77" event={"ID":"3326c30e-e68b-4b7d-975c-bf6bdb74b04b","Type":"ContainerDied","Data":"3998ccd7e266e6162deba27f009d537b0d886415c83eda1493b8085a2e214e16"} Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.391421 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smk77" event={"ID":"3326c30e-e68b-4b7d-975c-bf6bdb74b04b","Type":"ContainerStarted","Data":"a56cd4af40d33da40082585f23b65a54aa1dcf545262c17ae1c408d8a3a7e1be"} Dec 10 15:52:27 crc kubenswrapper[5114]: I1210 15:52:27.393161 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcgrk" event={"ID":"f5b528d4-737f-4220-93c1-835d19f6c10d","Type":"ContainerStarted","Data":"5749d17c8fa9be1a0afcf395eb248c9e9b778ad9df89e0847b2a0c0a76bd97ec"} Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.401803 5114 generic.go:358] "Generic (PLEG): container finished" podID="3326c30e-e68b-4b7d-975c-bf6bdb74b04b" containerID="596280bc32b59e963199c3c7d2eb76bef52a18f38fb9307d6c40d77088ec0d63" exitCode=0 Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.401862 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smk77" event={"ID":"3326c30e-e68b-4b7d-975c-bf6bdb74b04b","Type":"ContainerDied","Data":"596280bc32b59e963199c3c7d2eb76bef52a18f38fb9307d6c40d77088ec0d63"} Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.406437 5114 generic.go:358] "Generic (PLEG): container finished" podID="f5b528d4-737f-4220-93c1-835d19f6c10d" containerID="a1f45b4dad66c1940aa6a43c7ea89255e0486b2758115951622e81947c329e8a" exitCode=0 Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.407987 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcgrk" event={"ID":"f5b528d4-737f-4220-93c1-835d19f6c10d","Type":"ContainerDied","Data":"a1f45b4dad66c1940aa6a43c7ea89255e0486b2758115951622e81947c329e8a"} Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.760878 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:52:28 crc kubenswrapper[5114]: 
I1210 15:52:28.769141 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.772789 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.773037 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.785651 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.785715 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.785741 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwjj\" (UniqueName: \"kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.886927 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.887203 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.887304 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwjj\" (UniqueName: \"kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.887709 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.887763 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.907989 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwjj\" (UniqueName: \"kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj\") pod \"redhat-marketplace-dc4rb\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.961618 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j5mlp"] Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.965960 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.974557 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.976817 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5mlp"] Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.989299 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-catalog-content\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.989359 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2ql\" (UniqueName: \"kubernetes.io/projected/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-kube-api-access-4w2ql\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:28 crc kubenswrapper[5114]: I1210 15:52:28.989411 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-utilities\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.091310 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-catalog-content\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.091361 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4w2ql\" (UniqueName: \"kubernetes.io/projected/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-kube-api-access-4w2ql\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.091398 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-utilities\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.091771 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-catalog-content\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.091832 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-utilities\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.096044 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.110733 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w2ql\" (UniqueName: \"kubernetes.io/projected/a6def4e7-3ec6-41f5-9ff4-1a476e10191f-kube-api-access-4w2ql\") pod \"redhat-operators-j5mlp\" (UID: \"a6def4e7-3ec6-41f5-9ff4-1a476e10191f\") " pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.308657 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.422571 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smk77" event={"ID":"3326c30e-e68b-4b7d-975c-bf6bdb74b04b","Type":"ContainerStarted","Data":"9a1d733cc837a39edd020e341cd88d9ab8a0b512667b03725203c2990cff5ffd"} Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.427432 5114 generic.go:358] "Generic (PLEG): container finished" podID="f5b528d4-737f-4220-93c1-835d19f6c10d" containerID="4705c7c81f2f07cc26feca818df11d5ab7d2cbf2da33393b9e625a7c573777c6" exitCode=0 Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.427554 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcgrk" event={"ID":"f5b528d4-737f-4220-93c1-835d19f6c10d","Type":"ContainerDied","Data":"4705c7c81f2f07cc26feca818df11d5ab7d2cbf2da33393b9e625a7c573777c6"} Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.443961 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-smk77" podStartSLOduration=2.7965131420000002 podStartE2EDuration="3.443939617s" podCreationTimestamp="2025-12-10 15:52:26 +0000 UTC" firstStartedPulling="2025-12-10 15:52:27.392771596 +0000 UTC m=+373.113572773" lastFinishedPulling="2025-12-10 15:52:28.040198061 +0000 UTC m=+373.760999248" observedRunningTime="2025-12-10 15:52:29.441249499 +0000 UTC m=+375.162050686" watchObservedRunningTime="2025-12-10 15:52:29.443939617 +0000 UTC m=+375.164740794" Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.523335 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:52:29 crc kubenswrapper[5114]: W1210 15:52:29.529864 5114 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dc01581_1f70_490d_9fb7_d68483ddbe27.slice/crio-a2ec35033458eda4e5a7ab80f0ca291bdfe0fa93f1f752cc9a42a031f391468f WatchSource:0}: Error finding container a2ec35033458eda4e5a7ab80f0ca291bdfe0fa93f1f752cc9a42a031f391468f: Status 404 returned error can't find the container with id a2ec35033458eda4e5a7ab80f0ca291bdfe0fa93f1f752cc9a42a031f391468f Dec 10 15:52:29 crc kubenswrapper[5114]: I1210 15:52:29.727168 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5mlp"] Dec 10 15:52:29 crc kubenswrapper[5114]: W1210 15:52:29.739539 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6def4e7_3ec6_41f5_9ff4_1a476e10191f.slice/crio-92f56bf2dd785c840895c2fb6f69333e6defb2a890affcfb777ec00dad50d55c WatchSource:0}: Error finding container 92f56bf2dd785c840895c2fb6f69333e6defb2a890affcfb777ec00dad50d55c: Status 404 returned error can't find the container with id 92f56bf2dd785c840895c2fb6f69333e6defb2a890affcfb777ec00dad50d55c Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.435918 5114 generic.go:358] "Generic (PLEG): container finished" podID="a6def4e7-3ec6-41f5-9ff4-1a476e10191f" containerID="a778408b267dac298c3aab7acd716f501f68153b87c3f8f098e23e818455aea0" exitCode=0 Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.436356 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5mlp" event={"ID":"a6def4e7-3ec6-41f5-9ff4-1a476e10191f","Type":"ContainerDied","Data":"a778408b267dac298c3aab7acd716f501f68153b87c3f8f098e23e818455aea0"} Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.436389 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5mlp" event={"ID":"a6def4e7-3ec6-41f5-9ff4-1a476e10191f","Type":"ContainerStarted","Data":"92f56bf2dd785c840895c2fb6f69333e6defb2a890affcfb777ec00dad50d55c"} Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.447915 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcgrk" event={"ID":"f5b528d4-737f-4220-93c1-835d19f6c10d","Type":"ContainerStarted","Data":"a2434479c68542cd7c645b49fb9508b11c02cb43f560d20c8ebb64ccca1b97b7"} Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.449491 5114 generic.go:358] "Generic (PLEG): container finished" podID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerID="bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea" exitCode=0 Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.449529 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerDied","Data":"bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea"} Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.449570 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerStarted","Data":"a2ec35033458eda4e5a7ab80f0ca291bdfe0fa93f1f752cc9a42a031f391468f"} Dec 10 15:52:30 crc kubenswrapper[5114]: I1210 15:52:30.488355 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mcgrk" podStartSLOduration=3.942293022 podStartE2EDuration="4.488332591s" podCreationTimestamp="2025-12-10 
15:52:26 +0000 UTC" firstStartedPulling="2025-12-10 15:52:28.407873792 +0000 UTC m=+374.128674969" lastFinishedPulling="2025-12-10 15:52:28.953913351 +0000 UTC m=+374.674714538" observedRunningTime="2025-12-10 15:52:30.485102429 +0000 UTC m=+376.205903616" watchObservedRunningTime="2025-12-10 15:52:30.488332591 +0000 UTC m=+376.209133778" Dec 10 15:52:31 crc kubenswrapper[5114]: I1210 15:52:31.456381 5114 generic.go:358] "Generic (PLEG): container finished" podID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerID="1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08" exitCode=0 Dec 10 15:52:31 crc kubenswrapper[5114]: I1210 15:52:31.456464 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerDied","Data":"1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08"} Dec 10 15:52:31 crc kubenswrapper[5114]: I1210 15:52:31.458109 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5mlp" event={"ID":"a6def4e7-3ec6-41f5-9ff4-1a476e10191f","Type":"ContainerStarted","Data":"ba525299f97de9603fcb9165c47d3c517c1f90ed24bc11c0deea8e43518911ec"} Dec 10 15:52:32 crc kubenswrapper[5114]: I1210 15:52:32.467925 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerStarted","Data":"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a"} Dec 10 15:52:32 crc kubenswrapper[5114]: I1210 15:52:32.469534 5114 generic.go:358] "Generic (PLEG): container finished" podID="a6def4e7-3ec6-41f5-9ff4-1a476e10191f" containerID="ba525299f97de9603fcb9165c47d3c517c1f90ed24bc11c0deea8e43518911ec" exitCode=0 Dec 10 15:52:32 crc kubenswrapper[5114]: I1210 15:52:32.469606 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5mlp" event={"ID":"a6def4e7-3ec6-41f5-9ff4-1a476e10191f","Type":"ContainerDied","Data":"ba525299f97de9603fcb9165c47d3c517c1f90ed24bc11c0deea8e43518911ec"} Dec 10 15:52:32 crc kubenswrapper[5114]: I1210 15:52:32.487685 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dc4rb" podStartSLOduration=3.871427576 podStartE2EDuration="4.487668935s" podCreationTimestamp="2025-12-10 15:52:28 +0000 UTC" firstStartedPulling="2025-12-10 15:52:30.450127077 +0000 UTC m=+376.170928254" lastFinishedPulling="2025-12-10 15:52:31.066368426 +0000 UTC m=+376.787169613" observedRunningTime="2025-12-10 15:52:32.483933831 +0000 UTC m=+378.204735018" watchObservedRunningTime="2025-12-10 15:52:32.487668935 +0000 UTC m=+378.208470112" Dec 10 15:52:33 crc kubenswrapper[5114]: I1210 15:52:33.478775 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5mlp" event={"ID":"a6def4e7-3ec6-41f5-9ff4-1a476e10191f","Type":"ContainerStarted","Data":"743257a106af7b2004a34ba9a219ba28201b7a5aeeee5170bfaefbb0fac34df8"} Dec 10 15:52:33 crc kubenswrapper[5114]: I1210 15:52:33.499558 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5mlp" podStartSLOduration=4.913888561 podStartE2EDuration="5.499539359s" podCreationTimestamp="2025-12-10 15:52:28 +0000 UTC" firstStartedPulling="2025-12-10 15:52:30.437106909 +0000 UTC m=+376.157908086" lastFinishedPulling="2025-12-10 15:52:31.022757707 +0000 UTC m=+376.743558884" 
observedRunningTime="2025-12-10 15:52:33.49721036 +0000 UTC m=+379.218011537" watchObservedRunningTime="2025-12-10 15:52:33.499539359 +0000 UTC m=+379.220340536" Dec 10 15:52:36 crc kubenswrapper[5114]: I1210 15:52:36.694573 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:36 crc kubenswrapper[5114]: I1210 15:52:36.695186 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:36 crc kubenswrapper[5114]: I1210 15:52:36.733471 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:36 crc kubenswrapper[5114]: I1210 15:52:36.923446 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:36 crc kubenswrapper[5114]: I1210 15:52:36.923498 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:37 crc kubenswrapper[5114]: I1210 15:52:37.017570 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:37 crc kubenswrapper[5114]: I1210 15:52:37.532729 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-smk77" Dec 10 15:52:37 crc kubenswrapper[5114]: I1210 15:52:37.533102 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mcgrk" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.096779 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.097959 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.151637 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.309729 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.309782 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.344142 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.561063 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j5mlp" Dec 10 15:52:39 crc kubenswrapper[5114]: I1210 15:52:39.563136 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:53:51 crc kubenswrapper[5114]: I1210 15:53:51.877085 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 10 15:53:51 crc kubenswrapper[5114]: I1210 15:53:51.877719 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:54:21 crc kubenswrapper[5114]: I1210 15:54:21.876912 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:54:21 crc kubenswrapper[5114]: I1210 15:54:21.877489 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:54:51 crc kubenswrapper[5114]: I1210 15:54:51.876630 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:54:51 crc kubenswrapper[5114]: I1210 15:54:51.877252 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:54:51 crc kubenswrapper[5114]: I1210 15:54:51.877373 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:54:51 crc kubenswrapper[5114]: I1210 15:54:51.878343 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8"} pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:54:51 crc kubenswrapper[5114]: I1210 15:54:51.878459 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" containerID="cri-o://5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8" gracePeriod=600 Dec 10 15:54:52 crc kubenswrapper[5114]: I1210 15:54:52.876208 5114 generic.go:358] "Generic (PLEG): container finished" podID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerID="5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8" exitCode=0 Dec 10 15:54:52 crc kubenswrapper[5114]: I1210 15:54:52.876342 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerDied","Data":"5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8"} Dec 10 
15:54:52 crc kubenswrapper[5114]: I1210 15:54:52.876704 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c"} Dec 10 15:54:52 crc kubenswrapper[5114]: I1210 15:54:52.876727 5114 scope.go:117] "RemoveContainer" containerID="95aa66cb5f9214a9386ee8d4b2b98700f1848f272307ff884ab628c7ebd98b08" Dec 10 15:56:14 crc kubenswrapper[5114]: I1210 15:56:14.832435 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:56:14 crc kubenswrapper[5114]: I1210 15:56:14.841383 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 15:56:34 crc kubenswrapper[5114]: I1210 15:56:34.736604 5114 ???:1] "http: TLS handshake error from 192.168.126.11:45730: no serving certificate available for the kubelet" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.082515 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj"] Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.086622 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="kube-rbac-proxy" containerID="cri-o://afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.086699 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="ovnkube-cluster-manager" containerID="cri-o://c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.273801 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.298365 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb"] Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.298947 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="kube-rbac-proxy" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.298967 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="kube-rbac-proxy" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.298987 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="ovnkube-cluster-manager" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.298993 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="ovnkube-cluster-manager" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.302505 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="ovnkube-cluster-manager" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.302560 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerName="kube-rbac-proxy" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.306585 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.324112 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfnl"] Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326428 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-controller" containerID="cri-o://1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326523 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="nbdb" containerID="cri-o://2cfed98aeec135d93b96d6ec6155091f30a0164660db4d06ba4aa1ffee4edf9b" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326584 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="northd" containerID="cri-o://7453b04cfa74e156eca43d1a9ada6956017f813682a7e71c3aadde2b561e8728" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326646 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326693 5114 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-node" containerID="cri-o://dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326744 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-acl-logging" containerID="cri-o://67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.326982 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="sbdb" containerID="cri-o://7c5a891aeb984e12705fc11a5c58e8ab9e9a1966be0807109964e42a498c1a48" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.349240 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovnkube-controller" containerID="cri-o://9510cc8bb6d372e78924e4b1bf6e37e9a71cfc399a5bf10d29cb1d0573722165" gracePeriod=30 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.349935 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert\") pod \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.349987 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config\") pod \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.350048 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides\") pod \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.350165 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkm4v\" (UniqueName: \"kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v\") pod \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\" (UID: \"89d5aad2-7968-4ff9-a9fa-50a133a77df8\") " Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.351667 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "89d5aad2-7968-4ff9-a9fa-50a133a77df8" (UID: "89d5aad2-7968-4ff9-a9fa-50a133a77df8"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.352232 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "89d5aad2-7968-4ff9-a9fa-50a133a77df8" (UID: "89d5aad2-7968-4ff9-a9fa-50a133a77df8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.390566 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v" (OuterVolumeSpecName: "kube-api-access-zkm4v") pod "89d5aad2-7968-4ff9-a9fa-50a133a77df8" (UID: "89d5aad2-7968-4ff9-a9fa-50a133a77df8"). InnerVolumeSpecName "kube-api-access-zkm4v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.392304 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "89d5aad2-7968-4ff9-a9fa-50a133a77df8" (UID: "89d5aad2-7968-4ff9-a9fa-50a133a77df8"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:57:21 crc kubenswrapper[5114]: E1210 15:57:21.434361 5114 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-conmon-6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-conmon-dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-conmon-67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c683ba_536f_45e5_89b0_fe14989cad13.slice/crio-conmon-9bc56c41fabe5c4fd3e8cb8cc42b49588c7a28d1cb287728e0ecab178f638cec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef68a8_63de_4992_87b6_3dc6c70f5a1d.slice/crio-conmon-1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452164 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452354 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452416 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpm5\" (UniqueName: \"kubernetes.io/projected/917a5188-09fd-4e36-ba7f-2ad943861417-kube-api-access-lcpm5\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452515 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/917a5188-09fd-4e36-ba7f-2ad943861417-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452742 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkm4v\" (UniqueName: \"kubernetes.io/projected/89d5aad2-7968-4ff9-a9fa-50a133a77df8-kube-api-access-zkm4v\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452778 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452792 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.452805 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89d5aad2-7968-4ff9-a9fa-50a133a77df8-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.553729 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.554058 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcpm5\" (UniqueName: \"kubernetes.io/projected/917a5188-09fd-4e36-ba7f-2ad943861417-kube-api-access-lcpm5\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.554102 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/917a5188-09fd-4e36-ba7f-2ad943861417-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.554122 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.554549 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.555174 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/917a5188-09fd-4e36-ba7f-2ad943861417-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.559320 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/917a5188-09fd-4e36-ba7f-2ad943861417-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.569552 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcpm5\" (UniqueName: \"kubernetes.io/projected/917a5188-09fd-4e36-ba7f-2ad943861417-kube-api-access-lcpm5\") pod \"ovnkube-control-plane-97c9b6c48-g4fsb\" (UID: \"917a5188-09fd-4e36-ba7f-2ad943861417\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.623255 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.644145 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.685602 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lg6m5_e7c683ba-536f-45e5-89b0-fe14989cad13/kube-multus/0.log" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.685658 5114 generic.go:358] "Generic (PLEG): container finished" podID="e7c683ba-536f-45e5-89b0-fe14989cad13" containerID="9bc56c41fabe5c4fd3e8cb8cc42b49588c7a28d1cb287728e0ecab178f638cec" exitCode=2 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.685707 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lg6m5" event={"ID":"e7c683ba-536f-45e5-89b0-fe14989cad13","Type":"ContainerDied","Data":"9bc56c41fabe5c4fd3e8cb8cc42b49588c7a28d1cb287728e0ecab178f638cec"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.686581 5114 scope.go:117] "RemoveContainer" containerID="9bc56c41fabe5c4fd3e8cb8cc42b49588c7a28d1cb287728e0ecab178f638cec" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.688809 5114 generic.go:358] "Generic (PLEG): container finished" podID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerID="c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.688843 5114 generic.go:358] "Generic (PLEG): container finished" podID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" containerID="afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.688885 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.689000 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerDied","Data":"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.689051 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerDied","Data":"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.689062 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj" event={"ID":"89d5aad2-7968-4ff9-a9fa-50a133a77df8","Type":"ContainerDied","Data":"715ef84ad14f85866a9983d9bff96f891290de463b18f5e8b09f2d89451140e8"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.689078 5114 scope.go:117] "RemoveContainer" containerID="c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.704938 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" event={"ID":"917a5188-09fd-4e36-ba7f-2ad943861417","Type":"ContainerStarted","Data":"1d629f851b155d65e30851adc1296fb1731d6868dcc46a4e0827877c19589ea7"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.710720 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-acl-logging/0.log" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.711729 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-controller/0.log" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712252 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="9510cc8bb6d372e78924e4b1bf6e37e9a71cfc399a5bf10d29cb1d0573722165" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712291 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="7c5a891aeb984e12705fc11a5c58e8ab9e9a1966be0807109964e42a498c1a48" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712300 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="2cfed98aeec135d93b96d6ec6155091f30a0164660db4d06ba4aa1ffee4edf9b" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712306 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="7453b04cfa74e156eca43d1a9ada6956017f813682a7e71c3aadde2b561e8728" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712313 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712320 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" 
containerID="dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d" exitCode=0 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712326 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1" exitCode=143 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712334 5114 generic.go:358] "Generic (PLEG): container finished" podID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerID="1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75" exitCode=143 Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712420 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"9510cc8bb6d372e78924e4b1bf6e37e9a71cfc399a5bf10d29cb1d0573722165"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712453 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"7c5a891aeb984e12705fc11a5c58e8ab9e9a1966be0807109964e42a498c1a48"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712467 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"2cfed98aeec135d93b96d6ec6155091f30a0164660db4d06ba4aa1ffee4edf9b"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712478 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"7453b04cfa74e156eca43d1a9ada6956017f813682a7e71c3aadde2b561e8728"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712488 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712498 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712509 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.712520 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75"} Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.721457 5114 scope.go:117] "RemoveContainer" containerID="afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.723616 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj"] Dec 10 15:57:21 crc kubenswrapper[5114]: 
I1210 15:57:21.727711 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-79jfj"] Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.739573 5114 scope.go:117] "RemoveContainer" containerID="c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" Dec 10 15:57:21 crc kubenswrapper[5114]: E1210 15:57:21.740092 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382\": container with ID starting with c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382 not found: ID does not exist" containerID="c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740171 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382"} err="failed to get container status \"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382\": rpc error: code = NotFound desc = could not find container \"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382\": container with ID starting with c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382 not found: ID does not exist" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740205 5114 scope.go:117] "RemoveContainer" containerID="afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" Dec 10 15:57:21 crc kubenswrapper[5114]: E1210 15:57:21.740505 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e\": container with ID starting with afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e not found: ID does not exist" containerID="afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740525 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e"} err="failed to get container status \"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e\": rpc error: code = NotFound desc = could not find container \"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e\": container with ID starting with afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e not found: ID does not exist" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740563 5114 scope.go:117] "RemoveContainer" containerID="c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740741 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382"} err="failed to get container status \"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382\": rpc error: code = NotFound desc = could not find container \"c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382\": container with ID starting with c60837dea96be59955b7dfa612389eee467674f0b87c6fa5a283553b24dd8382 not found: ID does not exist" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.740758 5114 scope.go:117] "RemoveContainer" 
containerID="afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.741029 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e"} err="failed to get container status \"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e\": rpc error: code = NotFound desc = could not find container \"afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e\": container with ID starting with afe25505ab1ac853897e6caebc447ad61b2a5d9dfa6bdf2d9f9d3a7bf5002e4e not found: ID does not exist" Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.876059 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:57:21 crc kubenswrapper[5114]: I1210 15:57:21.876130 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.004488 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-acl-logging/0.log" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.004961 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-controller/0.log" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.005670 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060144 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pr4ch"] Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060698 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kubecfg-setup" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060719 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kubecfg-setup" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060732 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060739 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060752 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-node" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060758 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-node" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060766 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovnkube-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060771 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovnkube-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060778 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-acl-logging" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060784 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-acl-logging" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060789 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="sbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060794 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="sbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060817 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060823 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060834 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="northd" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060839 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="northd" Dec 10 15:57:22 crc 
kubenswrapper[5114]: I1210 15:57:22.060844 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="nbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060849 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="nbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060935 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-node" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060944 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060949 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="sbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060959 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovnkube-controller" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060967 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="ovn-acl-logging" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060974 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060981 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="nbdb" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.060988 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" containerName="northd" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.066158 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.160017 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161221 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161251 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.160114 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161301 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgklm\" (UniqueName: \"kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161323 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161337 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161333 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161349 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161365 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161434 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161369 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161460 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket" (OuterVolumeSpecName: "log-socket") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161489 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161520 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash" (OuterVolumeSpecName: "host-slash") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161580 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161645 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161677 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161702 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161725 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161751 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161771 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161783 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161808 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161810 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161827 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161846 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161881 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161914 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161919 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161923 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161938 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161955 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units\") pod \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\" (UID: \"5bef68a8-63de-4992-87b6-3dc6c70f5a1d\") " Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.161976 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log" (OuterVolumeSpecName: "node-log") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162071 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162089 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-log-socket\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162144 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162220 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-slash\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162261 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162335 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-node-log\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162384 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-systemd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162414 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-config\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162460 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-netns\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162542 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-env-overrides\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162598 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-bin\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162616 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162620 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-script-lib\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162674 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3883007-b4cf-4de4-a639-85c015110445-ovn-node-metrics-cert\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162704 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh569\" (UniqueName: \"kubernetes.io/projected/a3883007-b4cf-4de4-a639-85c015110445-kube-api-access-wh569\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162739 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-etc-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162769 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-systemd-units\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162769 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162796 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-kubelet\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162818 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-var-lib-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162838 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162863 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-ovn\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162889 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-netd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162959 5114 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-node-log\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162969 5114 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162980 5114 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162989 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.162997 5114 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163005 5114 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163013 5114 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-log-socket\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163020 5114 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163027 5114 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-slash\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163035 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163043 5114 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163052 5114 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163060 5114 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163068 5114 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163078 5114 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163086 5114 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.163094 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.165738 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm" (OuterVolumeSpecName: "kube-api-access-xgklm") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "kube-api-access-xgklm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.165992 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.173154 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5bef68a8-63de-4992-87b6-3dc6c70f5a1d" (UID: "5bef68a8-63de-4992-87b6-3dc6c70f5a1d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264539 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-log-socket\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264589 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264607 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-slash\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264626 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264677 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-node-log\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264695 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-log-socket\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264689 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-slash\") pod \"ovnkube-node-pr4ch\" (UID: 
\"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264718 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264737 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-systemd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264759 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-node-log\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264769 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-config\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264779 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-ovn-kubernetes\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264801 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-netns\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264824 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-env-overrides\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264836 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-systemd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264842 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-bin\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264859 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-bin\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264867 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-script-lib\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264883 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-run-netns\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264888 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3883007-b4cf-4de4-a639-85c015110445-ovn-node-metrics-cert\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264921 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wh569\" (UniqueName: \"kubernetes.io/projected/a3883007-b4cf-4de4-a639-85c015110445-kube-api-access-wh569\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264943 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-etc-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264966 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-systemd-units\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.264984 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-kubelet\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265005 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-var-lib-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc 
kubenswrapper[5114]: I1210 15:57:22.265025 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265055 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-ovn\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265078 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-netd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265119 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgklm\" (UniqueName: \"kubernetes.io/projected/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-kube-api-access-xgklm\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265133 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265146 5114 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bef68a8-63de-4992-87b6-3dc6c70f5a1d-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265180 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-cni-netd\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265487 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-systemd-units\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265509 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-env-overrides\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265549 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-etc-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265598 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-host-kubelet\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265598 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265629 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-config\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265665 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-run-ovn\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265670 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3883007-b4cf-4de4-a639-85c015110445-var-lib-openvswitch\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.265777 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3883007-b4cf-4de4-a639-85c015110445-ovnkube-script-lib\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.269365 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3883007-b4cf-4de4-a639-85c015110445-ovn-node-metrics-cert\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.281902 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh569\" (UniqueName: \"kubernetes.io/projected/a3883007-b4cf-4de4-a639-85c015110445-kube-api-access-wh569\") pod \"ovnkube-node-pr4ch\" (UID: \"a3883007-b4cf-4de4-a639-85c015110445\") " pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.380261 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:22 crc kubenswrapper[5114]: W1210 15:57:22.396607 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3883007_b4cf_4de4_a639_85c015110445.slice/crio-24b6dfa10d41fb8a490915044750ea562d70b29a424dafcab2c48be50cbdbd66 WatchSource:0}: Error finding container 24b6dfa10d41fb8a490915044750ea562d70b29a424dafcab2c48be50cbdbd66: Status 404 returned error can't find the container with id 24b6dfa10d41fb8a490915044750ea562d70b29a424dafcab2c48be50cbdbd66 Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.578442 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d5aad2-7968-4ff9-a9fa-50a133a77df8" path="/var/lib/kubelet/pods/89d5aad2-7968-4ff9-a9fa-50a133a77df8/volumes" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.722005 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lg6m5_e7c683ba-536f-45e5-89b0-fe14989cad13/kube-multus/0.log" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.722135 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lg6m5" event={"ID":"e7c683ba-536f-45e5-89b0-fe14989cad13","Type":"ContainerStarted","Data":"a1e8041fcff6812f0c1965018d68b4784131eb9c3eaa7d57befeefbbf1f6aa5d"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.725637 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" event={"ID":"917a5188-09fd-4e36-ba7f-2ad943861417","Type":"ContainerStarted","Data":"4e0020bac2c4e7783a4bcee9dffcdfd9c2f7ce3e6ff7aa0c4e5aab4646d7d91b"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.725698 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" event={"ID":"917a5188-09fd-4e36-ba7f-2ad943861417","Type":"ContainerStarted","Data":"aaec482fe1951f98ea6b4f14d0a5c96ecd0e9a6abbd767e708fc0ce56e47c24d"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.727968 5114 generic.go:358] "Generic (PLEG): container finished" podID="a3883007-b4cf-4de4-a639-85c015110445" containerID="833843a9dcadbccdb9e9721872598477430946b85ccef357b503799466f5bfe7" exitCode=0 Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.728063 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerDied","Data":"833843a9dcadbccdb9e9721872598477430946b85ccef357b503799466f5bfe7"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.728336 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"24b6dfa10d41fb8a490915044750ea562d70b29a424dafcab2c48be50cbdbd66"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.731905 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-acl-logging/0.log" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.732616 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfnl_5bef68a8-63de-4992-87b6-3dc6c70f5a1d/ovn-controller/0.log" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.733153 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" event={"ID":"5bef68a8-63de-4992-87b6-3dc6c70f5a1d","Type":"ContainerDied","Data":"a199cf1da8fb7790abbb3c746b8ca2bfcd2d855529f7600f1eee455a9ec8496b"} Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.733196 5114 scope.go:117] "RemoveContainer" containerID="9510cc8bb6d372e78924e4b1bf6e37e9a71cfc399a5bf10d29cb1d0573722165" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.733495 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfnl" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.755284 5114 scope.go:117] "RemoveContainer" containerID="7c5a891aeb984e12705fc11a5c58e8ab9e9a1966be0807109964e42a498c1a48" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.761173 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfnl"] Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.764893 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfnl"] Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.779363 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-g4fsb" podStartSLOduration=1.779337419 podStartE2EDuration="1.779337419s" podCreationTimestamp="2025-12-10 15:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:57:22.776692883 +0000 UTC m=+668.497494080" watchObservedRunningTime="2025-12-10 15:57:22.779337419 +0000 UTC m=+668.500138596" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.784076 5114 scope.go:117] "RemoveContainer" containerID="2cfed98aeec135d93b96d6ec6155091f30a0164660db4d06ba4aa1ffee4edf9b" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.801040 5114 scope.go:117] "RemoveContainer" containerID="7453b04cfa74e156eca43d1a9ada6956017f813682a7e71c3aadde2b561e8728" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.816941 5114 scope.go:117] "RemoveContainer" containerID="6796f012d93b2cae04b3abdaaedd096fd68fa55bc1e88ed84566ca0f045c1add" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.830507 5114 scope.go:117] "RemoveContainer" containerID="dcaf375a91ff6afa873a1942a68f4ef320684df70f7520248a0737cdb610ae8d" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.851798 5114 scope.go:117] "RemoveContainer" containerID="67d6a271759fdaaef631e0624f4fe9cc2c394c0f2ed5356595f7b6940bfa44b1" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.873041 5114 scope.go:117] "RemoveContainer" containerID="1d9ba53c807a45e250f9fd80efef0203c360744071040db04cbdfa9d322d0b75" Dec 10 15:57:22 crc kubenswrapper[5114]: I1210 15:57:22.890189 5114 scope.go:117] "RemoveContainer" containerID="b38448aaca5bba30a396046b9ada6c007e6433b291ac82aba2d547ae273e0124" Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743496 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"09cb3eb74a39947b3c6d4aecdf46143a24ab7384f5553ade52339300ee466008"} Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743796 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" 
event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"dd23729c41b9a102cff8dd98cfa852920fe6586e2ec704b175764f3270853f4b"} Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743807 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"67e84ef11247f41aafc2710017c84baf333ab8fe780e65c042adfe4474b15480"} Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743817 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"62471317ca791fab6ef4d2f177b10473c9080c346aa22578c5446e5cf4bab381"} Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743828 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"a748ab7b924105e1249b737db21a20c7a2793001ed56f7b73ebc4ae99844f48b"} Dec 10 15:57:23 crc kubenswrapper[5114]: I1210 15:57:23.743847 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"a773e2f5e52599beff187df7b4d0f82c855ee2e26f19e18213111feb38be6160"} Dec 10 15:57:24 crc kubenswrapper[5114]: I1210 15:57:24.579182 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bef68a8-63de-4992-87b6-3dc6c70f5a1d" path="/var/lib/kubelet/pods/5bef68a8-63de-4992-87b6-3dc6c70f5a1d/volumes" Dec 10 15:57:25 crc kubenswrapper[5114]: I1210 15:57:25.758109 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"62e60b4ef200ff57762f7bdec0f4abe769fe5dbc13dc58052780c37dd38f058e"} Dec 10 15:57:29 crc kubenswrapper[5114]: I1210 15:57:29.790947 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" event={"ID":"a3883007-b4cf-4de4-a639-85c015110445","Type":"ContainerStarted","Data":"7add299dfe6cde15a89352b9e4f8edbfaa74827255288f092bbc7cd7d68fe170"} Dec 10 15:57:29 crc kubenswrapper[5114]: I1210 15:57:29.791611 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:29 crc kubenswrapper[5114]: I1210 15:57:29.791640 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:29 crc kubenswrapper[5114]: I1210 15:57:29.818777 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:29 crc kubenswrapper[5114]: I1210 15:57:29.820489 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" podStartSLOduration=7.82047472 podStartE2EDuration="7.82047472s" podCreationTimestamp="2025-12-10 15:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:57:29.817809002 +0000 UTC m=+675.538610179" watchObservedRunningTime="2025-12-10 15:57:29.82047472 +0000 UTC m=+675.541275897" Dec 10 15:57:30 crc kubenswrapper[5114]: I1210 15:57:30.796226 5114 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:30 crc kubenswrapper[5114]: I1210 15:57:30.821141 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:57:51 crc kubenswrapper[5114]: I1210 15:57:51.877003 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:57:51 crc kubenswrapper[5114]: I1210 15:57:51.877587 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:58:02 crc kubenswrapper[5114]: I1210 15:58:02.834327 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pr4ch" Dec 10 15:58:21 crc kubenswrapper[5114]: I1210 15:58:21.877395 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:58:21 crc kubenswrapper[5114]: I1210 15:58:21.877973 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:58:21 crc kubenswrapper[5114]: I1210 15:58:21.878025 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 15:58:21 crc kubenswrapper[5114]: I1210 15:58:21.878716 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c"} pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:58:21 crc kubenswrapper[5114]: I1210 15:58:21.878793 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" containerID="cri-o://f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c" gracePeriod=600 Dec 10 15:58:22 crc kubenswrapper[5114]: I1210 15:58:22.097385 5114 generic.go:358] "Generic (PLEG): container finished" podID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerID="f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c" exitCode=0 Dec 10 15:58:22 crc kubenswrapper[5114]: I1210 15:58:22.097500 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" 
event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerDied","Data":"f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c"} Dec 10 15:58:22 crc kubenswrapper[5114]: I1210 15:58:22.097861 5114 scope.go:117] "RemoveContainer" containerID="5e07ecebaefcbce405d9057363dc3db1ce0048acc851762d96cdc6cf35b9afd8" Dec 10 15:58:23 crc kubenswrapper[5114]: I1210 15:58:23.104595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a"} Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.044149 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.044832 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dc4rb" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="registry-server" containerID="cri-o://b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a" gracePeriod=30 Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.399475 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.500233 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities\") pod \"2dc01581-1f70-490d-9fb7-d68483ddbe27\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.500346 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrwjj\" (UniqueName: \"kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj\") pod \"2dc01581-1f70-490d-9fb7-d68483ddbe27\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.500385 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content\") pod \"2dc01581-1f70-490d-9fb7-d68483ddbe27\" (UID: \"2dc01581-1f70-490d-9fb7-d68483ddbe27\") " Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.502185 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities" (OuterVolumeSpecName: "utilities") pod "2dc01581-1f70-490d-9fb7-d68483ddbe27" (UID: "2dc01581-1f70-490d-9fb7-d68483ddbe27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.508195 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj" (OuterVolumeSpecName: "kube-api-access-lrwjj") pod "2dc01581-1f70-490d-9fb7-d68483ddbe27" (UID: "2dc01581-1f70-490d-9fb7-d68483ddbe27"). InnerVolumeSpecName "kube-api-access-lrwjj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.510949 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dc01581-1f70-490d-9fb7-d68483ddbe27" (UID: "2dc01581-1f70-490d-9fb7-d68483ddbe27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.601580 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.601630 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lrwjj\" (UniqueName: \"kubernetes.io/projected/2dc01581-1f70-490d-9fb7-d68483ddbe27-kube-api-access-lrwjj\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:27 crc kubenswrapper[5114]: I1210 15:58:27.601644 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dc01581-1f70-490d-9fb7-d68483ddbe27-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.130074 5114 generic.go:358] "Generic (PLEG): container finished" podID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerID="b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a" exitCode=0 Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.130198 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4rb" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.130313 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerDied","Data":"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a"} Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.130344 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4rb" event={"ID":"2dc01581-1f70-490d-9fb7-d68483ddbe27","Type":"ContainerDied","Data":"a2ec35033458eda4e5a7ab80f0ca291bdfe0fa93f1f752cc9a42a031f391468f"} Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.130362 5114 scope.go:117] "RemoveContainer" containerID="b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.137640 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wwk29"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138215 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="registry-server" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138241 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="registry-server" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138260 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="extract-utilities" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138268 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" 
containerName="extract-utilities" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138283 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="extract-content" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138288 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="extract-content" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.138407 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" containerName="registry-server" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.148574 5114 scope.go:117] "RemoveContainer" containerID="1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.168726 5114 scope.go:117] "RemoveContainer" containerID="bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.170848 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wwk29"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.170982 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.191507 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.195195 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4rb"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.197602 5114 scope.go:117] "RemoveContainer" containerID="b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a" Dec 10 15:58:28 crc kubenswrapper[5114]: E1210 15:58:28.197975 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a\": container with ID starting with b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a not found: ID does not exist" containerID="b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.198004 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a"} err="failed to get container status \"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a\": rpc error: code = NotFound desc = could not find container \"b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a\": container with ID starting with b8aae3958668edf520e90a3dcc30cc99e593c986b847c2345abf049b25a35d5a not found: ID does not exist" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.198023 5114 scope.go:117] "RemoveContainer" containerID="1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08" Dec 10 15:58:28 crc kubenswrapper[5114]: E1210 15:58:28.198188 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08\": container with ID starting with 1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08 not found: ID does not exist" 
containerID="1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.198208 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08"} err="failed to get container status \"1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08\": rpc error: code = NotFound desc = could not find container \"1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08\": container with ID starting with 1584e83eab0425f37f0879849316e88a421d9a723a1ec981bf9c014bb448ed08 not found: ID does not exist" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.198222 5114 scope.go:117] "RemoveContainer" containerID="bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea" Dec 10 15:58:28 crc kubenswrapper[5114]: E1210 15:58:28.198392 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea\": container with ID starting with bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea not found: ID does not exist" containerID="bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.198413 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea"} err="failed to get container status \"bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea\": rpc error: code = NotFound desc = could not find container \"bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea\": container with ID starting with bbe8a8a85642d45161a6dc4d0d73b9829a570f32ed82c65030600f609f8141ea not found: ID does not exist" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208287 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-tls\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208349 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-trusted-ca\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208393 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-certificates\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208418 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: 
\"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208439 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208466 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb62g\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-kube-api-access-sb62g\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208495 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.208519 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.249674 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310061 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310123 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-tls\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310164 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-trusted-ca\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc 
kubenswrapper[5114]: I1210 15:58:28.310207 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-certificates\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310234 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310256 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.310871 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.312558 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-certificates\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.312618 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sb62g\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-kube-api-access-sb62g\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.313131 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-trusted-ca\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.315382 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-registry-tls\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.317817 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.327249 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb62g\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-kube-api-access-sb62g\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.327696 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc3c5d26-1a0f-47d6-88f0-963016bdcba6-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wwk29\" (UID: \"dc3c5d26-1a0f-47d6-88f0-963016bdcba6\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.382374 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.394171 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.394536 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.415512 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrdzz\" (UniqueName: \"kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.415579 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.415609 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.516547 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrdzz\" (UniqueName: \"kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.516588 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.516602 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.516701 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.517216 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.517216 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.535995 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrdzz\" (UniqueName: \"kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz\") pod \"community-operators-nkddm\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.576390 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc01581-1f70-490d-9fb7-d68483ddbe27" path="/var/lib/kubelet/pods/2dc01581-1f70-490d-9fb7-d68483ddbe27/volumes" Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.732007 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wwk29"] Dec 10 15:58:28 crc kubenswrapper[5114]: I1210 15:58:28.737171 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.021247 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.137853 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" event={"ID":"dc3c5d26-1a0f-47d6-88f0-963016bdcba6","Type":"ContainerStarted","Data":"dcc87bb822160c9dd4add94a15e38f801c7d4e117677e9a19def183e8b44699e"} Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.137902 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" event={"ID":"dc3c5d26-1a0f-47d6-88f0-963016bdcba6","Type":"ContainerStarted","Data":"61da80d9bcfefe19fd3a979b4ac8cebc2fc6a3fec2900c9cf12b426a58141a76"} Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.138414 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.140545 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerStarted","Data":"9ea8e0d2ac02442312066d859ee1d1bf49c1f02351e65b3415930311aac38ce5"} Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.140594 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerStarted","Data":"df58a0b93274a0b435202127d5c310db32a79db1065d405a697119c1713e0056"} Dec 10 15:58:29 crc kubenswrapper[5114]: I1210 15:58:29.154898 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" podStartSLOduration=1.154878629 podStartE2EDuration="1.154878629s" podCreationTimestamp="2025-12-10 15:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:58:29.154269235 +0000 UTC m=+734.875070422" watchObservedRunningTime="2025-12-10 15:58:29.154878629 +0000 UTC m=+734.875679806" Dec 10 15:58:30 crc kubenswrapper[5114]: I1210 15:58:30.146568 5114 generic.go:358] "Generic (PLEG): container finished" podID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerID="9ea8e0d2ac02442312066d859ee1d1bf49c1f02351e65b3415930311aac38ce5" exitCode=0 Dec 10 15:58:30 crc kubenswrapper[5114]: I1210 15:58:30.146669 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerDied","Data":"9ea8e0d2ac02442312066d859ee1d1bf49c1f02351e65b3415930311aac38ce5"} Dec 10 15:58:32 crc kubenswrapper[5114]: I1210 15:58:32.174839 5114 generic.go:358] "Generic (PLEG): container finished" podID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerID="4facd23bc0ea33809089e2b10df1a007436f0c7e4178bc16d7ac09f86b43f6d9" exitCode=0 Dec 10 15:58:32 crc kubenswrapper[5114]: I1210 15:58:32.174964 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerDied","Data":"4facd23bc0ea33809089e2b10df1a007436f0c7e4178bc16d7ac09f86b43f6d9"} Dec 10 15:58:33 crc kubenswrapper[5114]: 
I1210 15:58:33.182022 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerStarted","Data":"92e8a0942bdc7ddccdf297ee88fdcddec0f89e7db8e6b54983c5d2b40b9c3d4b"} Dec 10 15:58:33 crc kubenswrapper[5114]: I1210 15:58:33.203955 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nkddm" podStartSLOduration=4.223470362 podStartE2EDuration="5.203938972s" podCreationTimestamp="2025-12-10 15:58:28 +0000 UTC" firstStartedPulling="2025-12-10 15:58:30.147377693 +0000 UTC m=+735.868178870" lastFinishedPulling="2025-12-10 15:58:31.127846313 +0000 UTC m=+736.848647480" observedRunningTime="2025-12-10 15:58:33.201374863 +0000 UTC m=+738.922176060" watchObservedRunningTime="2025-12-10 15:58:33.203938972 +0000 UTC m=+738.924740149" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.221924 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2"] Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.259501 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2"] Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.259854 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.262538 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.319329 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.319375 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.319472 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8jg\" (UniqueName: \"kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.420664 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8jg\" (UniqueName: \"kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" 
(UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.420941 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.420961 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.421556 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.421673 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.446945 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8jg\" (UniqueName: \"kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.575298 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:36 crc kubenswrapper[5114]: I1210 15:58:36.779582 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2"] Dec 10 15:58:36 crc kubenswrapper[5114]: W1210 15:58:36.784431 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd13d7913_28b4_489c_9d9c_f55234d8b711.slice/crio-8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1 WatchSource:0}: Error finding container 8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1: Status 404 returned error can't find the container with id 8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1 Dec 10 15:58:37 crc kubenswrapper[5114]: I1210 15:58:37.205439 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" event={"ID":"d13d7913-28b4-489c-9d9c-f55234d8b711","Type":"ContainerStarted","Data":"8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1"} Dec 10 15:58:38 crc kubenswrapper[5114]: I1210 15:58:38.212804 5114 generic.go:358] "Generic (PLEG): container finished" podID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerID="96ad9b1c78023c27cd74b44412dec3f4e13e339cf9811cbfd697d47774dd2094" exitCode=0 Dec 10 15:58:38 crc kubenswrapper[5114]: I1210 15:58:38.213011 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" event={"ID":"d13d7913-28b4-489c-9d9c-f55234d8b711","Type":"ContainerDied","Data":"96ad9b1c78023c27cd74b44412dec3f4e13e339cf9811cbfd697d47774dd2094"} Dec 10 15:58:38 crc kubenswrapper[5114]: I1210 15:58:38.738057 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:38 crc kubenswrapper[5114]: I1210 15:58:38.738191 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:38 crc kubenswrapper[5114]: I1210 15:58:38.803348 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:39 crc kubenswrapper[5114]: I1210 15:58:39.265754 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.237993 5114 generic.go:358] "Generic (PLEG): container finished" podID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerID="2e6681f69b39341391e4ecb4bd8a0b1be8ea88ad4c5f559a466a6c286ba57151" exitCode=0 Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.238097 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" event={"ID":"d13d7913-28b4-489c-9d9c-f55234d8b711","Type":"ContainerDied","Data":"2e6681f69b39341391e4ecb4bd8a0b1be8ea88ad4c5f559a466a6c286ba57151"} Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.398647 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v"] Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.403646 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.407655 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v"] Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.483245 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.483305 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.483332 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqqk\" (UniqueName: \"kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.584146 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.584448 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.584570 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqqk\" (UniqueName: \"kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.584615 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.584831 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.607732 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqqk\" (UniqueName: \"kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.719112 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.908214 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v"] Dec 10 15:58:40 crc kubenswrapper[5114]: W1210 15:58:40.914373 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec07c073_113b_46d7_8d52_b011ca1f8f88.slice/crio-a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59 WatchSource:0}: Error finding container a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59: Status 404 returned error can't find the container with id a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59 Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.966171 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.974403 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:40 crc kubenswrapper[5114]: I1210 15:58:40.980493 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.092007 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.092124 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.092191 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mwg\" (UniqueName: \"kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.193289 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mwg\" (UniqueName: \"kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.193368 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.193405 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.193948 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.194031 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.213747 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r6mwg\" (UniqueName: \"kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg\") pod \"redhat-operators-dd867\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.246521 5114 generic.go:358] "Generic (PLEG): container finished" podID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerID="1a4f90fad97e2ecb85ae26f49fb16ba9a17b9809c1f8cbac84db52545b5ea714" exitCode=0 Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.246588 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" event={"ID":"d13d7913-28b4-489c-9d9c-f55234d8b711","Type":"ContainerDied","Data":"1a4f90fad97e2ecb85ae26f49fb16ba9a17b9809c1f8cbac84db52545b5ea714"} Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.248525 5114 generic.go:358] "Generic (PLEG): container finished" podID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerID="1f35efe476f033f53629978829bbbae2d2614c67dc8f8d78ab15c14b61e13b3d" exitCode=0 Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.248627 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" event={"ID":"ec07c073-113b-46d7-8d52-b011ca1f8f88","Type":"ContainerDied","Data":"1f35efe476f033f53629978829bbbae2d2614c67dc8f8d78ab15c14b61e13b3d"} Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.248671 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" event={"ID":"ec07c073-113b-46d7-8d52-b011ca1f8f88","Type":"ContainerStarted","Data":"a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59"} Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.326477 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.412352 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk"] Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.419714 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.425763 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk"] Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.496558 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.496606 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.496654 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2cb9\" (UniqueName: \"kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.540708 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.598507 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2cb9\" (UniqueName: \"kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.598592 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.598611 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.599584 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.599682 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.618119 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2cb9\" (UniqueName: \"kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.739238 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:41 crc kubenswrapper[5114]: I1210 15:58:41.983089 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk"] Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.255116 5114 generic.go:358] "Generic (PLEG): container finished" podID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerID="126a8f1d8d119ddc0150fffe50d520ecce4bd28e1176be3b5fa36869bd60a6a1" exitCode=0 Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.255237 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerDied","Data":"126a8f1d8d119ddc0150fffe50d520ecce4bd28e1176be3b5fa36869bd60a6a1"} Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.255262 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerStarted","Data":"bb2e102193445517653a0690082afab037bfc9aea9d2618b20850bc9c694506e"} Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.256661 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" event={"ID":"6e58d423-bc0e-4603-868d-f8ba90b78d9f","Type":"ContainerStarted","Data":"c06fe48227fd60d6799577905d3097398149578f81b1118f9f3772d8561dbcc4"} Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.446867 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.510195 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x8jg\" (UniqueName: \"kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg\") pod \"d13d7913-28b4-489c-9d9c-f55234d8b711\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.510341 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle\") pod \"d13d7913-28b4-489c-9d9c-f55234d8b711\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.510455 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util\") pod \"d13d7913-28b4-489c-9d9c-f55234d8b711\" (UID: \"d13d7913-28b4-489c-9d9c-f55234d8b711\") " Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.512043 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle" (OuterVolumeSpecName: "bundle") pod "d13d7913-28b4-489c-9d9c-f55234d8b711" (UID: "d13d7913-28b4-489c-9d9c-f55234d8b711"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.515439 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg" (OuterVolumeSpecName: "kube-api-access-7x8jg") pod "d13d7913-28b4-489c-9d9c-f55234d8b711" (UID: "d13d7913-28b4-489c-9d9c-f55234d8b711"). InnerVolumeSpecName "kube-api-access-7x8jg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.522259 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util" (OuterVolumeSpecName: "util") pod "d13d7913-28b4-489c-9d9c-f55234d8b711" (UID: "d13d7913-28b4-489c-9d9c-f55234d8b711"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.611691 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.611722 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7x8jg\" (UniqueName: \"kubernetes.io/projected/d13d7913-28b4-489c-9d9c-f55234d8b711-kube-api-access-7x8jg\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:42 crc kubenswrapper[5114]: I1210 15:58:42.611732 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d13d7913-28b4-489c-9d9c-f55234d8b711-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.263392 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" event={"ID":"d13d7913-28b4-489c-9d9c-f55234d8b711","Type":"ContainerDied","Data":"8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1"} Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.263691 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ace476ac5e660db789707360c91c6d1cbbebb85c4324f6af6a64c6eeaad7cf1" Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.263449 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210nr9m2" Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.265582 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerStarted","Data":"835fb59fd7b7e19702b283247097ad0b7dc1bfc05e5a05517cf1099ecc06a8e8"} Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.267084 5114 generic.go:358] "Generic (PLEG): container finished" podID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerID="e4ce3dc387ea1746f30a1d8359fd046ffb31ca49ed6869d95c51f46f9013dcc9" exitCode=0 Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.267193 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" event={"ID":"ec07c073-113b-46d7-8d52-b011ca1f8f88","Type":"ContainerDied","Data":"e4ce3dc387ea1746f30a1d8359fd046ffb31ca49ed6869d95c51f46f9013dcc9"} Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.268776 5114 generic.go:358] "Generic (PLEG): container finished" podID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerID="6effc3bd7a3f66585f27c54f28711a2f6d2d68e9e607d14f8810528f23bcf714" exitCode=0 Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.268832 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" event={"ID":"6e58d423-bc0e-4603-868d-f8ba90b78d9f","Type":"ContainerDied","Data":"6effc3bd7a3f66585f27c54f28711a2f6d2d68e9e607d14f8810528f23bcf714"} Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.958945 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:43 crc kubenswrapper[5114]: I1210 15:58:43.959360 5114 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-nkddm" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="registry-server" containerID="cri-o://92e8a0942bdc7ddccdf297ee88fdcddec0f89e7db8e6b54983c5d2b40b9c3d4b" gracePeriod=2 Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.278964 5114 generic.go:358] "Generic (PLEG): container finished" podID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerID="92e8a0942bdc7ddccdf297ee88fdcddec0f89e7db8e6b54983c5d2b40b9c3d4b" exitCode=0 Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.279049 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerDied","Data":"92e8a0942bdc7ddccdf297ee88fdcddec0f89e7db8e6b54983c5d2b40b9c3d4b"} Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.279116 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkddm" event={"ID":"a72612c9-0d0f-4051-bf72-b2f47fe2910b","Type":"ContainerDied","Data":"df58a0b93274a0b435202127d5c310db32a79db1065d405a697119c1713e0056"} Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.279129 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df58a0b93274a0b435202127d5c310db32a79db1065d405a697119c1713e0056" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.281103 5114 generic.go:358] "Generic (PLEG): container finished" podID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerID="835fb59fd7b7e19702b283247097ad0b7dc1bfc05e5a05517cf1099ecc06a8e8" exitCode=0 Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.281217 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerDied","Data":"835fb59fd7b7e19702b283247097ad0b7dc1bfc05e5a05517cf1099ecc06a8e8"} Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.284398 5114 generic.go:358] "Generic (PLEG): container finished" podID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerID="811ba4e8b68a9f568f7165989e684929ad3170c73313b1efb39debe176289214" exitCode=0 Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.284747 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" event={"ID":"ec07c073-113b-46d7-8d52-b011ca1f8f88","Type":"ContainerDied","Data":"811ba4e8b68a9f568f7165989e684929ad3170c73313b1efb39debe176289214"} Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.315912 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.437790 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content\") pod \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.437868 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities\") pod \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.437946 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrdzz\" (UniqueName: \"kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz\") pod \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\" (UID: \"a72612c9-0d0f-4051-bf72-b2f47fe2910b\") " Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.438996 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities" (OuterVolumeSpecName: "utilities") pod "a72612c9-0d0f-4051-bf72-b2f47fe2910b" (UID: "a72612c9-0d0f-4051-bf72-b2f47fe2910b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.447857 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz" (OuterVolumeSpecName: "kube-api-access-hrdzz") pod "a72612c9-0d0f-4051-bf72-b2f47fe2910b" (UID: "a72612c9-0d0f-4051-bf72-b2f47fe2910b"). InnerVolumeSpecName "kube-api-access-hrdzz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.500309 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72612c9-0d0f-4051-bf72-b2f47fe2910b" (UID: "a72612c9-0d0f-4051-bf72-b2f47fe2910b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.540877 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.540922 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72612c9-0d0f-4051-bf72-b2f47fe2910b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:44 crc kubenswrapper[5114]: I1210 15:58:44.540935 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrdzz\" (UniqueName: \"kubernetes.io/projected/a72612c9-0d0f-4051-bf72-b2f47fe2910b-kube-api-access-hrdzz\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.290708 5114 generic.go:358] "Generic (PLEG): container finished" podID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerID="07ea2475cee2b0525701b0fe62f6f26f91c770827f27f914e24950f9f1bbaca8" exitCode=0 Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.290829 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" event={"ID":"6e58d423-bc0e-4603-868d-f8ba90b78d9f","Type":"ContainerDied","Data":"07ea2475cee2b0525701b0fe62f6f26f91c770827f27f914e24950f9f1bbaca8"} Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.294893 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerStarted","Data":"4eda9f1726d80f3550f204dbcd27559dcd08f9e30c458660ed80b70a06b96a89"} Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.295060 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkddm" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.330815 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.335633 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nkddm"] Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.346711 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dd867" podStartSLOduration=4.630695847 podStartE2EDuration="5.346692658s" podCreationTimestamp="2025-12-10 15:58:40 +0000 UTC" firstStartedPulling="2025-12-10 15:58:42.256127887 +0000 UTC m=+747.976929084" lastFinishedPulling="2025-12-10 15:58:42.972124698 +0000 UTC m=+748.692925895" observedRunningTime="2025-12-10 15:58:45.341621953 +0000 UTC m=+751.062423140" watchObservedRunningTime="2025-12-10 15:58:45.346692658 +0000 UTC m=+751.067493835" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.607535 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d"] Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608081 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="extract" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608098 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="extract" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608116 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="extract-content" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608121 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="extract-content" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608129 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="pull" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608134 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="pull" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608145 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="util" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608150 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="util" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608160 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="extract-utilities" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608165 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="extract-utilities" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608175 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="registry-server" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608181 5114 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="registry-server" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608263 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13d7913-28b4-489c-9d9c-f55234d8b711" containerName="extract" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.608293 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" containerName="registry-server" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.611952 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.614019 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.628470 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d"] Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.755536 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpqqk\" (UniqueName: \"kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk\") pod \"ec07c073-113b-46d7-8d52-b011ca1f8f88\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.755621 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util\") pod \"ec07c073-113b-46d7-8d52-b011ca1f8f88\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.756448 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle\") pod \"ec07c073-113b-46d7-8d52-b011ca1f8f88\" (UID: \"ec07c073-113b-46d7-8d52-b011ca1f8f88\") " Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.756781 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkql\" (UniqueName: \"kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.756869 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.756972 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: 
\"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.757834 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle" (OuterVolumeSpecName: "bundle") pod "ec07c073-113b-46d7-8d52-b011ca1f8f88" (UID: "ec07c073-113b-46d7-8d52-b011ca1f8f88"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.763917 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk" (OuterVolumeSpecName: "kube-api-access-jpqqk") pod "ec07c073-113b-46d7-8d52-b011ca1f8f88" (UID: "ec07c073-113b-46d7-8d52-b011ca1f8f88"). InnerVolumeSpecName "kube-api-access-jpqqk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.769270 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util" (OuterVolumeSpecName: "util") pod "ec07c073-113b-46d7-8d52-b011ca1f8f88" (UID: "ec07c073-113b-46d7-8d52-b011ca1f8f88"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858260 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkql\" (UniqueName: \"kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858361 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858415 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858462 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jpqqk\" (UniqueName: \"kubernetes.io/projected/ec07c073-113b-46d7-8d52-b011ca1f8f88-kube-api-access-jpqqk\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858474 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858483 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ec07c073-113b-46d7-8d52-b011ca1f8f88-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.858997 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.859332 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.878529 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkql\" (UniqueName: \"kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:45 crc kubenswrapper[5114]: I1210 15:58:45.926541 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.138958 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d"] Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.164888 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165678 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="util" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165693 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="util" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165699 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="extract" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165705 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="extract" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165726 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="pull" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165732 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="pull" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.165838 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec07c073-113b-46d7-8d52-b011ca1f8f88" containerName="extract" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.177778 5114 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.177922 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.263259 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.263361 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstdg\" (UniqueName: \"kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.263443 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.301675 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.301676 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ed4m8v" event={"ID":"ec07c073-113b-46d7-8d52-b011ca1f8f88","Type":"ContainerDied","Data":"a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59"} Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.301747 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17302b2792857b4f8c9563d53e1eff244b1b9690a36ef49a7d2cbbfd3b63d59" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.303095 5114 generic.go:358] "Generic (PLEG): container finished" podID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerID="3574245eb3a67f448c58e4f2e4aec48fd8161a1da4596db3c21f5a3e69957679" exitCode=0 Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.303167 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" event={"ID":"6e58d423-bc0e-4603-868d-f8ba90b78d9f","Type":"ContainerDied","Data":"3574245eb3a67f448c58e4f2e4aec48fd8161a1da4596db3c21f5a3e69957679"} Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.305180 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerStarted","Data":"c90f00320ce7fd359c777accd4f7f216fc34a6cb46c29c9fb6dcf9bfa799f091"} Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.365038 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.365137 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.365169 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zstdg\" (UniqueName: \"kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.365689 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.365730 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.385337 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstdg\" (UniqueName: \"kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg\") pod \"certified-operators-dv9k2\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.494678 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.576091 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72612c9-0d0f-4051-bf72-b2f47fe2910b" path="/var/lib/kubelet/pods/a72612c9-0d0f-4051-bf72-b2f47fe2910b/volumes" Dec 10 15:58:46 crc kubenswrapper[5114]: I1210 15:58:46.728538 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.311306 5114 generic.go:358] "Generic (PLEG): container finished" podID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerID="71f969d571faab3c6eed33a0dc91cb1fc24941e701ee38950dc3d8d3a72ee4c3" exitCode=0 Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.311405 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerDied","Data":"71f969d571faab3c6eed33a0dc91cb1fc24941e701ee38950dc3d8d3a72ee4c3"} Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.311745 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerStarted","Data":"0480b83c38d50c7bb04de685598242f01b871dbd2d90040eec08d0eb75cb7b84"} Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.313133 5114 generic.go:358] "Generic (PLEG): container finished" podID="8103a067-d904-4355-93ee-d0d0d91dc987" containerID="f945b028637f8137a7040d1c371e322ecd51593e0106b347a3a7d1d99c334b31" exitCode=0 Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.313176 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerDied","Data":"f945b028637f8137a7040d1c371e322ecd51593e0106b347a3a7d1d99c334b31"} Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.652780 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.783864 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util\") pod \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.784245 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2cb9\" (UniqueName: \"kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9\") pod \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.784998 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle" (OuterVolumeSpecName: "bundle") pod "6e58d423-bc0e-4603-868d-f8ba90b78d9f" (UID: "6e58d423-bc0e-4603-868d-f8ba90b78d9f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.784508 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle\") pod \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\" (UID: \"6e58d423-bc0e-4603-868d-f8ba90b78d9f\") " Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.785969 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.801987 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9" (OuterVolumeSpecName: "kube-api-access-f2cb9") pod "6e58d423-bc0e-4603-868d-f8ba90b78d9f" (UID: "6e58d423-bc0e-4603-868d-f8ba90b78d9f"). InnerVolumeSpecName "kube-api-access-f2cb9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.887881 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2cb9\" (UniqueName: \"kubernetes.io/projected/6e58d423-bc0e-4603-868d-f8ba90b78d9f-kube-api-access-f2cb9\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.942461 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util" (OuterVolumeSpecName: "util") pod "6e58d423-bc0e-4603-868d-f8ba90b78d9f" (UID: "6e58d423-bc0e-4603-868d-f8ba90b78d9f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:47 crc kubenswrapper[5114]: I1210 15:58:47.989796 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e58d423-bc0e-4603-868d-f8ba90b78d9f-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:48 crc kubenswrapper[5114]: I1210 15:58:48.321424 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" event={"ID":"6e58d423-bc0e-4603-868d-f8ba90b78d9f","Type":"ContainerDied","Data":"c06fe48227fd60d6799577905d3097398149578f81b1118f9f3772d8561dbcc4"} Dec 10 15:58:48 crc kubenswrapper[5114]: I1210 15:58:48.321465 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06fe48227fd60d6799577905d3097398149578f81b1118f9f3772d8561dbcc4" Dec 10 15:58:48 crc kubenswrapper[5114]: I1210 15:58:48.321433 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fg7jmk" Dec 10 15:58:49 crc kubenswrapper[5114]: I1210 15:58:49.350152 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerStarted","Data":"b6ef452f45f421c34eed8f55a0f8f12e38af9794e0dcacb1ac75e2e38c998b82"} Dec 10 15:58:50 crc kubenswrapper[5114]: I1210 15:58:50.158369 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wwk29" Dec 10 15:58:50 crc kubenswrapper[5114]: I1210 15:58:50.350069 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.326981 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.327042 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.468156 5114 generic.go:358] "Generic (PLEG): container finished" podID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerID="b6ef452f45f421c34eed8f55a0f8f12e38af9794e0dcacb1ac75e2e38c998b82" exitCode=0 Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.468243 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerDied","Data":"b6ef452f45f421c34eed8f55a0f8f12e38af9794e0dcacb1ac75e2e38c998b82"} Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.573819 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-8bxt2"] Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574580 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="extract" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574608 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="extract" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574625 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="pull" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574632 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="pull" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574659 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="util" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574666 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="util" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.574847 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e58d423-bc0e-4603-868d-f8ba90b78d9f" containerName="extract" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.593809 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-8bxt2"] Dec 10 15:58:51 crc 
kubenswrapper[5114]: I1210 15:58:51.593947 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.616696 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.617071 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-fgf52\"" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.617284 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.689002 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f89cn\" (UniqueName: \"kubernetes.io/projected/3482f330-553d-46bb-890c-c4bef1677c86-kube-api-access-f89cn\") pod \"obo-prometheus-operator-86648f486b-8bxt2\" (UID: \"3482f330-553d-46bb-890c-c4bef1677c86\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.752361 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss"] Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.764963 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.767520 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-dtgqc\"" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.767751 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.768618 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss"] Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.775095 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9"] Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.790943 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f89cn\" (UniqueName: \"kubernetes.io/projected/3482f330-553d-46bb-890c-c4bef1677c86-kube-api-access-f89cn\") pod \"obo-prometheus-operator-86648f486b-8bxt2\" (UID: \"3482f330-553d-46bb-890c-c4bef1677c86\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.846324 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f89cn\" (UniqueName: \"kubernetes.io/projected/3482f330-553d-46bb-890c-c4bef1677c86-kube-api-access-f89cn\") pod \"obo-prometheus-operator-86648f486b-8bxt2\" (UID: \"3482f330-553d-46bb-890c-c4bef1677c86\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.892372 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.892627 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.910826 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.993492 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:51 crc kubenswrapper[5114]: I1210 15:58:51.993812 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.009516 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.009730 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e33f561f-5b1c-4541-aad6-74c1286e52e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss\" (UID: \"e33f561f-5b1c-4541-aad6-74c1286e52e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.012859 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9"] Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.013012 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.036810 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-rs98d"] Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.088607 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.095063 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.095148 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.130554 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-rs98d"] Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.130761 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.137653 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.138241 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-bcbxj\"" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.146812 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-t4tzm"] Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.153807 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.156653 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-r2mgb\"" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.166010 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-t4tzm"] Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196190 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196232 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphtc\" (UniqueName: \"kubernetes.io/projected/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-kube-api-access-rphtc\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196261 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196335 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6z8\" (UniqueName: \"kubernetes.io/projected/86594c71-ecb1-4858-8ab4-875367b6583c-kube-api-access-md6z8\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196367 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86594c71-ecb1-4858-8ab4-875367b6583c-observability-operator-tls\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.196392 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.200145 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.200294 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b78b24-6c78-441d-aacb-cb3e5f008be4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9\" (UID: \"f0b78b24-6c78-441d-aacb-cb3e5f008be4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.297347 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-md6z8\" (UniqueName: \"kubernetes.io/projected/86594c71-ecb1-4858-8ab4-875367b6583c-kube-api-access-md6z8\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.297506 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86594c71-ecb1-4858-8ab4-875367b6583c-observability-operator-tls\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.297629 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.297739 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rphtc\" (UniqueName: \"kubernetes.io/projected/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-kube-api-access-rphtc\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.300886 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.304930 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86594c71-ecb1-4858-8ab4-875367b6583c-observability-operator-tls\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.315674 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphtc\" (UniqueName: \"kubernetes.io/projected/ae8dba87-5b6e-4a59-849a-d5bd2c458f24-kube-api-access-rphtc\") pod \"perses-operator-68bdb49cbf-t4tzm\" (UID: \"ae8dba87-5b6e-4a59-849a-d5bd2c458f24\") " pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 
15:58:52.317710 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6z8\" (UniqueName: \"kubernetes.io/projected/86594c71-ecb1-4858-8ab4-875367b6583c-kube-api-access-md6z8\") pod \"observability-operator-78c97476f4-rs98d\" (UID: \"86594c71-ecb1-4858-8ab4-875367b6583c\") " pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.361525 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.373619 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dd867" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="registry-server" probeResult="failure" output=< Dec 10 15:58:52 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Dec 10 15:58:52 crc kubenswrapper[5114]: > Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.448727 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:58:52 crc kubenswrapper[5114]: I1210 15:58:52.470415 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.489181 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerStarted","Data":"a14b6e339421cd62a9fc4904bd667babf86058df48a2d7e90e7fb0dca5706ddc"} Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.492752 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerStarted","Data":"0f223c30b1cbeae8d4678211ff4d406f1446c68c7ced81b2143fd3c11877a468"} Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.518045 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dv9k2" podStartSLOduration=7.316552977 podStartE2EDuration="8.51802585s" podCreationTimestamp="2025-12-10 15:58:46 +0000 UTC" firstStartedPulling="2025-12-10 15:58:47.313905203 +0000 UTC m=+753.034706380" lastFinishedPulling="2025-12-10 15:58:48.515378076 +0000 UTC m=+754.236179253" observedRunningTime="2025-12-10 15:58:54.516871944 +0000 UTC m=+760.237673141" watchObservedRunningTime="2025-12-10 15:58:54.51802585 +0000 UTC m=+760.238827047" Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.532379 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-rs98d"] Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.557896 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-8bxt2"] Dec 10 15:58:54 crc kubenswrapper[5114]: W1210 15:58:54.572605 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3482f330_553d_46bb_890c_c4bef1677c86.slice/crio-7a5441094649510990cd47d603de265511d46a1e59240f35074c4d0afd19660c WatchSource:0}: Error finding container 7a5441094649510990cd47d603de265511d46a1e59240f35074c4d0afd19660c: Status 404 returned error 
can't find the container with id 7a5441094649510990cd47d603de265511d46a1e59240f35074c4d0afd19660c Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.595606 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-t4tzm"] Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.955576 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss"] Dec 10 15:58:54 crc kubenswrapper[5114]: I1210 15:58:54.959524 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9"] Dec 10 15:58:54 crc kubenswrapper[5114]: W1210 15:58:54.964619 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33f561f_5b1c_4541_aad6_74c1286e52e1.slice/crio-4f373bcc3c793332c7454a9b6a085c824300791f5b1c1c564563f5bda2d95a4a WatchSource:0}: Error finding container 4f373bcc3c793332c7454a9b6a085c824300791f5b1c1c564563f5bda2d95a4a: Status 404 returned error can't find the container with id 4f373bcc3c793332c7454a9b6a085c824300791f5b1c1c564563f5bda2d95a4a Dec 10 15:58:54 crc kubenswrapper[5114]: W1210 15:58:54.984762 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b78b24_6c78_441d_aacb_cb3e5f008be4.slice/crio-c8bc2957a2fdba514a2727a09570865efaa5daf175b4e1a8c1cb3911eb68edfd WatchSource:0}: Error finding container c8bc2957a2fdba514a2727a09570865efaa5daf175b4e1a8c1cb3911eb68edfd: Status 404 returned error can't find the container with id c8bc2957a2fdba514a2727a09570865efaa5daf175b4e1a8c1cb3911eb68edfd Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.498758 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" event={"ID":"ae8dba87-5b6e-4a59-849a-d5bd2c458f24","Type":"ContainerStarted","Data":"8078ea7fe2b1001a2b6548965b4b895e0cf0f374ce6a30c86ffb7e0154cdad40"} Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.499625 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" event={"ID":"f0b78b24-6c78-441d-aacb-cb3e5f008be4","Type":"ContainerStarted","Data":"c8bc2957a2fdba514a2727a09570865efaa5daf175b4e1a8c1cb3911eb68edfd"} Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.500675 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" event={"ID":"3482f330-553d-46bb-890c-c4bef1677c86","Type":"ContainerStarted","Data":"7a5441094649510990cd47d603de265511d46a1e59240f35074c4d0afd19660c"} Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.501480 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-rs98d" event={"ID":"86594c71-ecb1-4858-8ab4-875367b6583c","Type":"ContainerStarted","Data":"663b4ef9c101f2e04dd31ad4c47bf4b87d20b3a100de012e742daf8413934207"} Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.502218 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" event={"ID":"e33f561f-5b1c-4541-aad6-74c1286e52e1","Type":"ContainerStarted","Data":"4f373bcc3c793332c7454a9b6a085c824300791f5b1c1c564563f5bda2d95a4a"} Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.503537 5114 
generic.go:358] "Generic (PLEG): container finished" podID="8103a067-d904-4355-93ee-d0d0d91dc987" containerID="0f223c30b1cbeae8d4678211ff4d406f1446c68c7ced81b2143fd3c11877a468" exitCode=0 Dec 10 15:58:55 crc kubenswrapper[5114]: I1210 15:58:55.503592 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerDied","Data":"0f223c30b1cbeae8d4678211ff4d406f1446c68c7ced81b2143fd3c11877a468"} Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.495962 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.496342 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.526109 5114 generic.go:358] "Generic (PLEG): container finished" podID="8103a067-d904-4355-93ee-d0d0d91dc987" containerID="a9f34b20a1d3a12212613ce2b11d6b29758add1eac8c2a3ef8480aedee15f9ec" exitCode=0 Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.526446 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerDied","Data":"a9f34b20a1d3a12212613ce2b11d6b29758add1eac8c2a3ef8480aedee15f9ec"} Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.550078 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.981672 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-774974c745-8slrq"] Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.994620 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.997801 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 10 15:58:56 crc kubenswrapper[5114]: I1210 15:58:56.997813 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.003468 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-774974c745-8slrq"] Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.004077 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-xstml\"" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.004374 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.066079 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-webhook-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.066141 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-apiservice-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.066197 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4qg\" (UniqueName: \"kubernetes.io/projected/54167739-accb-4d18-99df-84ea5f3527e6-kube-api-access-jp4qg\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.167420 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-webhook-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.167497 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-apiservice-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.167565 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4qg\" (UniqueName: \"kubernetes.io/projected/54167739-accb-4d18-99df-84ea5f3527e6-kube-api-access-jp4qg\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" 
Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.180227 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-webhook-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.183031 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54167739-accb-4d18-99df-84ea5f3527e6-apiservice-cert\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.209350 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4qg\" (UniqueName: \"kubernetes.io/projected/54167739-accb-4d18-99df-84ea5f3527e6-kube-api-access-jp4qg\") pod \"elastic-operator-774974c745-8slrq\" (UID: \"54167739-accb-4d18-99df-84ea5f3527e6\") " pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.312352 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-774974c745-8slrq" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.775898 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.809245 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-774974c745-8slrq"] Dec 10 15:58:57 crc kubenswrapper[5114]: W1210 15:58:57.844805 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54167739_accb_4d18_99df_84ea5f3527e6.slice/crio-2b6ed54cf19f79b810cc34052277fd569bc31c4537806c86905fb70ff0b9fb80 WatchSource:0}: Error finding container 2b6ed54cf19f79b810cc34052277fd569bc31c4537806c86905fb70ff0b9fb80: Status 404 returned error can't find the container with id 2b6ed54cf19f79b810cc34052277fd569bc31c4537806c86905fb70ff0b9fb80 Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.875587 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util\") pod \"8103a067-d904-4355-93ee-d0d0d91dc987\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.875670 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlkql\" (UniqueName: \"kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql\") pod \"8103a067-d904-4355-93ee-d0d0d91dc987\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.875957 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle\") pod \"8103a067-d904-4355-93ee-d0d0d91dc987\" (UID: \"8103a067-d904-4355-93ee-d0d0d91dc987\") " Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.877944 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle" (OuterVolumeSpecName: "bundle") pod "8103a067-d904-4355-93ee-d0d0d91dc987" (UID: "8103a067-d904-4355-93ee-d0d0d91dc987"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.884443 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql" (OuterVolumeSpecName: "kube-api-access-vlkql") pod "8103a067-d904-4355-93ee-d0d0d91dc987" (UID: "8103a067-d904-4355-93ee-d0d0d91dc987"). InnerVolumeSpecName "kube-api-access-vlkql". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.889223 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util" (OuterVolumeSpecName: "util") pod "8103a067-d904-4355-93ee-d0d0d91dc987" (UID: "8103a067-d904-4355-93ee-d0d0d91dc987"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.982990 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.983026 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vlkql\" (UniqueName: \"kubernetes.io/projected/8103a067-d904-4355-93ee-d0d0d91dc987-kube-api-access-vlkql\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:57 crc kubenswrapper[5114]: I1210 15:58:57.983036 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8103a067-d904-4355-93ee-d0d0d91dc987-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.532601 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bqv8d"] Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533234 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="pull" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533252 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="pull" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533263 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="util" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533282 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="util" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533312 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="extract" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533318 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="extract" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.533408 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="8103a067-d904-4355-93ee-d0d0d91dc987" containerName="extract" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.536445 5114 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.538857 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-f67dw\"" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.544412 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bqv8d"] Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.572912 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.586700 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am8g9d" event={"ID":"8103a067-d904-4355-93ee-d0d0d91dc987","Type":"ContainerDied","Data":"c90f00320ce7fd359c777accd4f7f216fc34a6cb46c29c9fb6dcf9bfa799f091"} Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.587161 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90f00320ce7fd359c777accd4f7f216fc34a6cb46c29c9fb6dcf9bfa799f091" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.587177 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-774974c745-8slrq" event={"ID":"54167739-accb-4d18-99df-84ea5f3527e6","Type":"ContainerStarted","Data":"2b6ed54cf19f79b810cc34052277fd569bc31c4537806c86905fb70ff0b9fb80"} Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.697532 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkqt\" (UniqueName: \"kubernetes.io/projected/fa91a597-e74d-4e3c-b44c-a70bee4ea851-kube-api-access-cgkqt\") pod \"interconnect-operator-78b9bd8798-bqv8d\" (UID: \"fa91a597-e74d-4e3c-b44c-a70bee4ea851\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.799205 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgkqt\" (UniqueName: \"kubernetes.io/projected/fa91a597-e74d-4e3c-b44c-a70bee4ea851-kube-api-access-cgkqt\") pod \"interconnect-operator-78b9bd8798-bqv8d\" (UID: \"fa91a597-e74d-4e3c-b44c-a70bee4ea851\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.816665 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgkqt\" (UniqueName: \"kubernetes.io/projected/fa91a597-e74d-4e3c-b44c-a70bee4ea851-kube-api-access-cgkqt\") pod \"interconnect-operator-78b9bd8798-bqv8d\" (UID: \"fa91a597-e74d-4e3c-b44c-a70bee4ea851\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" Dec 10 15:58:58 crc kubenswrapper[5114]: I1210 15:58:58.854069 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" Dec 10 15:58:59 crc kubenswrapper[5114]: I1210 15:58:59.149647 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bqv8d"] Dec 10 15:58:59 crc kubenswrapper[5114]: W1210 15:58:59.182005 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa91a597_e74d_4e3c_b44c_a70bee4ea851.slice/crio-150ff395044bdf8d4e61dc392c903f438d43a0460cc9a2102786e39e19d9e80a WatchSource:0}: Error finding container 150ff395044bdf8d4e61dc392c903f438d43a0460cc9a2102786e39e19d9e80a: Status 404 returned error can't find the container with id 150ff395044bdf8d4e61dc392c903f438d43a0460cc9a2102786e39e19d9e80a Dec 10 15:58:59 crc kubenswrapper[5114]: I1210 15:58:59.607413 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" event={"ID":"fa91a597-e74d-4e3c-b44c-a70bee4ea851","Type":"ContainerStarted","Data":"150ff395044bdf8d4e61dc392c903f438d43a0460cc9a2102786e39e19d9e80a"} Dec 10 15:59:01 crc kubenswrapper[5114]: I1210 15:59:01.377652 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:59:01 crc kubenswrapper[5114]: I1210 15:59:01.417813 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:59:05 crc kubenswrapper[5114]: I1210 15:59:05.557494 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:59:05 crc kubenswrapper[5114]: I1210 15:59:05.558131 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dd867" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="registry-server" containerID="cri-o://4eda9f1726d80f3550f204dbcd27559dcd08f9e30c458660ed80b70a06b96a89" gracePeriod=2 Dec 10 15:59:06 crc kubenswrapper[5114]: I1210 15:59:06.593073 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:59:06 crc kubenswrapper[5114]: I1210 15:59:06.663065 5114 generic.go:358] "Generic (PLEG): container finished" podID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerID="4eda9f1726d80f3550f204dbcd27559dcd08f9e30c458660ed80b70a06b96a89" exitCode=0 Dec 10 15:59:06 crc kubenswrapper[5114]: I1210 15:59:06.663361 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerDied","Data":"4eda9f1726d80f3550f204dbcd27559dcd08f9e30c458660ed80b70a06b96a89"} Dec 10 15:59:09 crc kubenswrapper[5114]: I1210 15:59:09.935775 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.010685 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6mwg\" (UniqueName: \"kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg\") pod \"086f5b30-8c63-4ee1-8f52-8734702f2afe\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.010738 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content\") pod \"086f5b30-8c63-4ee1-8f52-8734702f2afe\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.010773 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities\") pod \"086f5b30-8c63-4ee1-8f52-8734702f2afe\" (UID: \"086f5b30-8c63-4ee1-8f52-8734702f2afe\") " Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.013958 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities" (OuterVolumeSpecName: "utilities") pod "086f5b30-8c63-4ee1-8f52-8734702f2afe" (UID: "086f5b30-8c63-4ee1-8f52-8734702f2afe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.034178 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg" (OuterVolumeSpecName: "kube-api-access-r6mwg") pod "086f5b30-8c63-4ee1-8f52-8734702f2afe" (UID: "086f5b30-8c63-4ee1-8f52-8734702f2afe"). InnerVolumeSpecName "kube-api-access-r6mwg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.113357 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6mwg\" (UniqueName: \"kubernetes.io/projected/086f5b30-8c63-4ee1-8f52-8734702f2afe-kube-api-access-r6mwg\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.113399 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.123765 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "086f5b30-8c63-4ee1-8f52-8734702f2afe" (UID: "086f5b30-8c63-4ee1-8f52-8734702f2afe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.216500 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086f5b30-8c63-4ee1-8f52-8734702f2afe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.694537 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd867" event={"ID":"086f5b30-8c63-4ee1-8f52-8734702f2afe","Type":"ContainerDied","Data":"bb2e102193445517653a0690082afab037bfc9aea9d2618b20850bc9c694506e"} Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.694602 5114 scope.go:117] "RemoveContainer" containerID="4eda9f1726d80f3550f204dbcd27559dcd08f9e30c458660ed80b70a06b96a89" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.694661 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dd867" Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.715956 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.721174 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dd867"] Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.972365 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:59:10 crc kubenswrapper[5114]: I1210 15:59:10.972755 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dv9k2" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="registry-server" containerID="cri-o://a14b6e339421cd62a9fc4904bd667babf86058df48a2d7e90e7fb0dca5706ddc" gracePeriod=2 Dec 10 15:59:11 crc kubenswrapper[5114]: I1210 15:59:11.703731 5114 generic.go:358] "Generic (PLEG): container finished" podID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerID="a14b6e339421cd62a9fc4904bd667babf86058df48a2d7e90e7fb0dca5706ddc" exitCode=0 Dec 10 15:59:11 crc kubenswrapper[5114]: I1210 15:59:11.703984 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerDied","Data":"a14b6e339421cd62a9fc4904bd667babf86058df48a2d7e90e7fb0dca5706ddc"} Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.578437 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" path="/var/lib/kubelet/pods/086f5b30-8c63-4ee1-8f52-8734702f2afe/volumes" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.921395 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn"] Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922019 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="extract-content" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922043 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="extract-content" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922075 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" 
containerName="registry-server" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922083 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="registry-server" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922117 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="extract-utilities" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922128 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="extract-utilities" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.922236 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="086f5b30-8c63-4ee1-8f52-8734702f2afe" containerName="registry-server" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.945522 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn"] Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.945665 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.948264 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-xb97h\"" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.948545 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 10 15:59:12 crc kubenswrapper[5114]: I1210 15:59:12.948645 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.054856 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ee10cec-658b-4304-aab2-66e77834a6e8-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.055103 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/4ee10cec-658b-4304-aab2-66e77834a6e8-kube-api-access-tgzb8\") pod \"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.156596 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ee10cec-658b-4304-aab2-66e77834a6e8-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.156679 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/4ee10cec-658b-4304-aab2-66e77834a6e8-kube-api-access-tgzb8\") pod 
\"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.157246 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ee10cec-658b-4304-aab2-66e77834a6e8-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.177008 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzb8\" (UniqueName: \"kubernetes.io/projected/4ee10cec-658b-4304-aab2-66e77834a6e8-kube-api-access-tgzb8\") pod \"cert-manager-operator-controller-manager-64c74584c4-cv9zn\" (UID: \"4ee10cec-658b-4304-aab2-66e77834a6e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:13 crc kubenswrapper[5114]: I1210 15:59:13.262646 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.108603 5114 scope.go:117] "RemoveContainer" containerID="835fb59fd7b7e19702b283247097ad0b7dc1bfc05e5a05517cf1099ecc06a8e8" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.176898 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.185649 5114 scope.go:117] "RemoveContainer" containerID="126a8f1d8d119ddc0150fffe50d520ecce4bd28e1176be3b5fa36869bd60a6a1" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.250187 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities\") pod \"a6274885-0329-40d3-bfc5-b1dcb367b221\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.250230 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstdg\" (UniqueName: \"kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg\") pod \"a6274885-0329-40d3-bfc5-b1dcb367b221\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.250356 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content\") pod \"a6274885-0329-40d3-bfc5-b1dcb367b221\" (UID: \"a6274885-0329-40d3-bfc5-b1dcb367b221\") " Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.251445 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities" (OuterVolumeSpecName: "utilities") pod "a6274885-0329-40d3-bfc5-b1dcb367b221" (UID: "a6274885-0329-40d3-bfc5-b1dcb367b221"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.257203 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg" (OuterVolumeSpecName: "kube-api-access-zstdg") pod "a6274885-0329-40d3-bfc5-b1dcb367b221" (UID: "a6274885-0329-40d3-bfc5-b1dcb367b221"). InnerVolumeSpecName "kube-api-access-zstdg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.286974 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6274885-0329-40d3-bfc5-b1dcb367b221" (UID: "a6274885-0329-40d3-bfc5-b1dcb367b221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.352132 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.352482 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zstdg\" (UniqueName: \"kubernetes.io/projected/a6274885-0329-40d3-bfc5-b1dcb367b221-kube-api-access-zstdg\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.352499 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6274885-0329-40d3-bfc5-b1dcb367b221-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.494794 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" podUID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" containerName="registry" containerID="cri-o://b5c085a6a942c7a987a05a5ea8dd9853f7b4cb2bb9e7eca8e3e8d0dd120285ac" gracePeriod=30 Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.586660 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn"] Dec 10 15:59:15 crc kubenswrapper[5114]: W1210 15:59:15.605539 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee10cec_658b_4304_aab2_66e77834a6e8.slice/crio-b936c9f7a28d566b9569c3b06744fc6a1d9f7004b6e80b8b7ed36fbe5fd06bc1 WatchSource:0}: Error finding container b936c9f7a28d566b9569c3b06744fc6a1d9f7004b6e80b8b7ed36fbe5fd06bc1: Status 404 returned error can't find the container with id b936c9f7a28d566b9569c3b06744fc6a1d9f7004b6e80b8b7ed36fbe5fd06bc1 Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.735337 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" event={"ID":"e33f561f-5b1c-4541-aad6-74c1286e52e1","Type":"ContainerStarted","Data":"fb015c01ca2a2c15d17be8c51d4fb44744073f8bd4543da1e0b8f37a39053e11"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.739426 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-774974c745-8slrq" 
event={"ID":"54167739-accb-4d18-99df-84ea5f3527e6","Type":"ContainerStarted","Data":"2ead31726570e32faee5a51c33d12dceb9c8cf942817bad4de89034ae3aad310"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.741598 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" event={"ID":"ae8dba87-5b6e-4a59-849a-d5bd2c458f24","Type":"ContainerStarted","Data":"2e968c61337ea0086227c408ef81670810abc1ff74180a4ced0c30e54814f627"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.742041 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.743595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" event={"ID":"4ee10cec-658b-4304-aab2-66e77834a6e8","Type":"ContainerStarted","Data":"b936c9f7a28d566b9569c3b06744fc6a1d9f7004b6e80b8b7ed36fbe5fd06bc1"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.745031 5114 generic.go:358] "Generic (PLEG): container finished" podID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" containerID="b5c085a6a942c7a987a05a5ea8dd9853f7b4cb2bb9e7eca8e3e8d0dd120285ac" exitCode=0 Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.745113 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" event={"ID":"64a2e767-3d9b-4af5-8889-ab3f2b41a071","Type":"ContainerDied","Data":"b5c085a6a942c7a987a05a5ea8dd9853f7b4cb2bb9e7eca8e3e8d0dd120285ac"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.765299 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" event={"ID":"f0b78b24-6c78-441d-aacb-cb3e5f008be4","Type":"ContainerStarted","Data":"69c4c3382cc6161df7ed47960299f06a7d2b4db2e111e92ae12693b67ded83e6"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.768630 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-7whss" podStartSLOduration=5.25120828 podStartE2EDuration="24.768606681s" podCreationTimestamp="2025-12-10 15:58:51 +0000 UTC" firstStartedPulling="2025-12-10 15:58:54.967675668 +0000 UTC m=+760.688476845" lastFinishedPulling="2025-12-10 15:59:14.485074069 +0000 UTC m=+780.205875246" observedRunningTime="2025-12-10 15:59:15.763119418 +0000 UTC m=+781.483920595" watchObservedRunningTime="2025-12-10 15:59:15.768606681 +0000 UTC m=+781.489407938" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.777396 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv9k2" event={"ID":"a6274885-0329-40d3-bfc5-b1dcb367b221","Type":"ContainerDied","Data":"0480b83c38d50c7bb04de685598242f01b871dbd2d90040eec08d0eb75cb7b84"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.777463 5114 scope.go:117] "RemoveContainer" containerID="a14b6e339421cd62a9fc4904bd667babf86058df48a2d7e90e7fb0dca5706ddc" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.777557 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv9k2" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.782799 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" event={"ID":"3482f330-553d-46bb-890c-c4bef1677c86","Type":"ContainerStarted","Data":"0d8692d39fd8e24a5fdebd2d89c3d68fefb5d623cc70d39e6a7c1fa946024963"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.792806 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" event={"ID":"fa91a597-e74d-4e3c-b44c-a70bee4ea851","Type":"ContainerStarted","Data":"9ea97b493f09a89cae631a74e7302c453cea5d6142d5495607294b1667cdee08"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.793336 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" podStartSLOduration=3.277185767 podStartE2EDuration="23.793310323s" podCreationTimestamp="2025-12-10 15:58:52 +0000 UTC" firstStartedPulling="2025-12-10 15:58:54.599331478 +0000 UTC m=+760.320132665" lastFinishedPulling="2025-12-10 15:59:15.115456044 +0000 UTC m=+780.836257221" observedRunningTime="2025-12-10 15:59:15.793048917 +0000 UTC m=+781.513850104" watchObservedRunningTime="2025-12-10 15:59:15.793310323 +0000 UTC m=+781.514111500" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.796187 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-rs98d" event={"ID":"86594c71-ecb1-4858-8ab4-875367b6583c","Type":"ContainerStarted","Data":"5e806d76b25bd0f7268c7290834e7be2a53e41d761622d366f236cc52a01960b"} Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.796424 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.801075 5114 scope.go:117] "RemoveContainer" containerID="b6ef452f45f421c34eed8f55a0f8f12e38af9794e0dcacb1ac75e2e38c998b82" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.850945 5114 scope.go:117] "RemoveContainer" containerID="71f969d571faab3c6eed33a0dc91cb1fc24941e701ee38950dc3d8d3a72ee4c3" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.860564 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-rs98d" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.861740 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-774974c745-8slrq" podStartSLOduration=2.600845687 podStartE2EDuration="19.86171813s" podCreationTimestamp="2025-12-10 15:58:56 +0000 UTC" firstStartedPulling="2025-12-10 15:58:57.854639932 +0000 UTC m=+763.575441109" lastFinishedPulling="2025-12-10 15:59:15.115512365 +0000 UTC m=+780.836313552" observedRunningTime="2025-12-10 15:59:15.832782825 +0000 UTC m=+781.553584002" watchObservedRunningTime="2025-12-10 15:59:15.86171813 +0000 UTC m=+781.582519307" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.869963 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-8bxt2" podStartSLOduration=4.356840334 podStartE2EDuration="24.869942961s" podCreationTimestamp="2025-12-10 15:58:51 +0000 UTC" firstStartedPulling="2025-12-10 15:58:54.588983913 +0000 UTC m=+760.309785090" 
lastFinishedPulling="2025-12-10 15:59:15.10208654 +0000 UTC m=+780.822887717" observedRunningTime="2025-12-10 15:59:15.866661011 +0000 UTC m=+781.587462188" watchObservedRunningTime="2025-12-10 15:59:15.869942961 +0000 UTC m=+781.590744148" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.897130 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-bqv8d" podStartSLOduration=1.786773846 podStartE2EDuration="17.897113473s" podCreationTimestamp="2025-12-10 15:58:58 +0000 UTC" firstStartedPulling="2025-12-10 15:58:59.183344737 +0000 UTC m=+764.904145914" lastFinishedPulling="2025-12-10 15:59:15.293684364 +0000 UTC m=+781.014485541" observedRunningTime="2025-12-10 15:59:15.894002657 +0000 UTC m=+781.614803844" watchObservedRunningTime="2025-12-10 15:59:15.897113473 +0000 UTC m=+781.617914650" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.913567 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fbc7766cd-kbbs9" podStartSLOduration=4.787441819 podStartE2EDuration="24.913549743s" podCreationTimestamp="2025-12-10 15:58:51 +0000 UTC" firstStartedPulling="2025-12-10 15:58:54.98934659 +0000 UTC m=+760.710147767" lastFinishedPulling="2025-12-10 15:59:15.115454514 +0000 UTC m=+780.836255691" observedRunningTime="2025-12-10 15:59:15.91217166 +0000 UTC m=+781.632972837" watchObservedRunningTime="2025-12-10 15:59:15.913549743 +0000 UTC m=+781.634350920" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.940638 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-rs98d" podStartSLOduration=3.424964417 podStartE2EDuration="23.940620323s" podCreationTimestamp="2025-12-10 15:58:52 +0000 UTC" firstStartedPulling="2025-12-10 15:58:54.586431784 +0000 UTC m=+760.307232961" lastFinishedPulling="2025-12-10 15:59:15.10208769 +0000 UTC m=+780.822888867" observedRunningTime="2025-12-10 15:59:15.93720203 +0000 UTC m=+781.658003207" watchObservedRunningTime="2025-12-10 15:59:15.940620323 +0000 UTC m=+781.661421500" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.951649 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.954007 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:59:15 crc kubenswrapper[5114]: I1210 15:59:15.961684 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dv9k2"] Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.069691 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.069809 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.069839 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.069982 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070055 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9z9h\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070103 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070179 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070196 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets\") pod \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\" (UID: \"64a2e767-3d9b-4af5-8889-ab3f2b41a071\") " Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070831 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates" 
(OuterVolumeSpecName: "registry-certificates") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.070896 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.075262 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.076453 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.078246 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h" (OuterVolumeSpecName: "kube-api-access-s9z9h") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "kube-api-access-s9z9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.084916 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.090083 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.097229 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "64a2e767-3d9b-4af5-8889-ab3f2b41a071" (UID: "64a2e767-3d9b-4af5-8889-ab3f2b41a071"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171778 5114 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171837 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171853 5114 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171864 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s9z9h\" (UniqueName: \"kubernetes.io/projected/64a2e767-3d9b-4af5-8889-ab3f2b41a071-kube-api-access-s9z9h\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171875 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64a2e767-3d9b-4af5-8889-ab3f2b41a071-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171887 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/64a2e767-3d9b-4af5-8889-ab3f2b41a071-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.171899 5114 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/64a2e767-3d9b-4af5-8889-ab3f2b41a071-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.591074 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" path="/var/lib/kubelet/pods/a6274885-0329-40d3-bfc5-b1dcb367b221/volumes" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.606832 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607424 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="registry-server" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607439 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="registry-server" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607462 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="extract-utilities" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607468 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="extract-utilities" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607474 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" containerName="registry" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607480 5114 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" containerName="registry" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607494 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="extract-content" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607501 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="extract-content" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607637 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" containerName="registry" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.607650 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a6274885-0329-40d3-bfc5-b1dcb367b221" containerName="registry-server" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.638328 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.638903 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.640788 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.640871 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.641302 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-cknv4\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.645788 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.645872 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.646091 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.646519 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.646696 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.646848 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.680947 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681001 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681214 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681245 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681263 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681304 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681337 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681386 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681415 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681453 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1afb4e68-a57e-4ba5-945b-eba6ad03011c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681482 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681507 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681526 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681543 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.681561 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782702 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782752 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782784 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782815 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1afb4e68-a57e-4ba5-945b-eba6ad03011c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782843 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782864 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782882 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782899 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782918 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782970 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.782993 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.783019 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.783035 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.783050 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.783743 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.787426 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.788473 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.789853 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.790047 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.790085 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.793899 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.795143 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.797991 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.798437 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.801967 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.808178 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.808615 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1afb4e68-a57e-4ba5-945b-eba6ad03011c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.809019 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.809226 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1afb4e68-a57e-4ba5-945b-eba6ad03011c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1afb4e68-a57e-4ba5-945b-eba6ad03011c\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.812595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" event={"ID":"64a2e767-3d9b-4af5-8889-ab3f2b41a071","Type":"ContainerDied","Data":"64126368dab96d8061bec79c4c3444ce34645d016014fe506e405fd0f9e6f281"} Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.812655 5114 scope.go:117] "RemoveContainer" containerID="b5c085a6a942c7a987a05a5ea8dd9853f7b4cb2bb9e7eca8e3e8d0dd120285ac" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.813082 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-2tbm6" Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.846813 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.851762 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-2tbm6"] Dec 10 15:59:16 crc kubenswrapper[5114]: I1210 15:59:16.958296 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:17 crc kubenswrapper[5114]: I1210 15:59:17.185431 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 10 15:59:17 crc kubenswrapper[5114]: I1210 15:59:17.827144 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1afb4e68-a57e-4ba5-945b-eba6ad03011c","Type":"ContainerStarted","Data":"45763a4711072a5378688d20b2ffb176a8a9be2ae4b56aa03aa87c4021e1865e"} Dec 10 15:59:18 crc kubenswrapper[5114]: I1210 15:59:18.579441 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a2e767-3d9b-4af5-8889-ab3f2b41a071" path="/var/lib/kubelet/pods/64a2e767-3d9b-4af5-8889-ab3f2b41a071/volumes" Dec 10 15:59:23 crc kubenswrapper[5114]: I1210 15:59:23.886417 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" event={"ID":"4ee10cec-658b-4304-aab2-66e77834a6e8","Type":"ContainerStarted","Data":"bb86f671ce0387dad8398566c3e6702494b261deb9cf0337cada3870229ade1d"} Dec 10 15:59:23 crc kubenswrapper[5114]: I1210 15:59:23.914352 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-cv9zn" podStartSLOduration=4.163531624 podStartE2EDuration="11.914328255s" podCreationTimestamp="2025-12-10 15:59:12 +0000 UTC" firstStartedPulling="2025-12-10 15:59:15.607843533 +0000 UTC m=+781.328644720" lastFinishedPulling="2025-12-10 15:59:23.358640174 +0000 UTC m=+789.079441351" observedRunningTime="2025-12-10 15:59:23.906556326 +0000 UTC m=+789.627357513" watchObservedRunningTime="2025-12-10 15:59:23.914328255 +0000 UTC m=+789.635129432" Dec 10 15:59:27 crc kubenswrapper[5114]: I1210 15:59:27.709253 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n"] Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.745896 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n"] Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.745938 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg"] Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.746052 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.748579 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.754106 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-nsjtn\"" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.763800 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.863036 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9j4\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-kube-api-access-2f9j4\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.863111 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.964534 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.964633 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2f9j4\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-kube-api-access-2f9j4\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.986385 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f9j4\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-kube-api-access-2f9j4\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:28 crc kubenswrapper[5114]: I1210 15:59:28.990744 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zvg6n\" (UID: \"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:29 crc kubenswrapper[5114]: I1210 15:59:29.066608 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.482227 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.485655 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-lmddd\"" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.491290 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-t4tzm" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.491690 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg"] Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.584271 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.584724 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2xj\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-kube-api-access-9d2xj\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.686483 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2xj\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-kube-api-access-9d2xj\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.686547 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.705548 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.711687 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2xj\" (UniqueName: \"kubernetes.io/projected/09f0762e-68d4-41e7-b255-cf8fd202d85d-kube-api-access-9d2xj\") pod \"cert-manager-cainjector-7dbf76d5c8-c66hg\" (UID: \"09f0762e-68d4-41e7-b255-cf8fd202d85d\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:30 crc kubenswrapper[5114]: I1210 15:59:30.804563 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.183837 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n"] Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.196529 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-ffn5t"] Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.203309 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.206146 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-x54hr\"" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.208867 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ffn5t"] Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.320226 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-bound-sa-token\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.320326 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7jn\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-kube-api-access-zz7jn\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.421834 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7jn\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-kube-api-access-zz7jn\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.422245 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-bound-sa-token\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.443114 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg"] Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.444220 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-bound-sa-token\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.444337 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7jn\" (UniqueName: \"kubernetes.io/projected/84c25b1c-374c-44b4-9259-577355ba9a53-kube-api-access-zz7jn\") pod \"cert-manager-858d87f86b-ffn5t\" (UID: \"84c25b1c-374c-44b4-9259-577355ba9a53\") " 
pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: W1210 15:59:40.446420 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f0762e_68d4_41e7_b255_cf8fd202d85d.slice/crio-95512368e7c2b2dd0dfc08b3da6173f3ca3ec382f80adeccec3ca3273208de02 WatchSource:0}: Error finding container 95512368e7c2b2dd0dfc08b3da6173f3ca3ec382f80adeccec3ca3273208de02: Status 404 returned error can't find the container with id 95512368e7c2b2dd0dfc08b3da6173f3ca3ec382f80adeccec3ca3273208de02 Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.557317 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ffn5t" Dec 10 15:59:40 crc kubenswrapper[5114]: I1210 15:59:40.774405 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ffn5t"] Dec 10 15:59:40 crc kubenswrapper[5114]: W1210 15:59:40.778794 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c25b1c_374c_44b4_9259_577355ba9a53.slice/crio-cfbbb1a65199b5647399e3e8bf7a40e87678f32b677b2ee95ac64c5492f558c1 WatchSource:0}: Error finding container cfbbb1a65199b5647399e3e8bf7a40e87678f32b677b2ee95ac64c5492f558c1: Status 404 returned error can't find the container with id cfbbb1a65199b5647399e3e8bf7a40e87678f32b677b2ee95ac64c5492f558c1 Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.010323 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1afb4e68-a57e-4ba5-945b-eba6ad03011c","Type":"ContainerStarted","Data":"fa2383d99465968c05d058988fb2dd4ea1dab68fc796c5b0e25daf0c400177ea"} Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.011697 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" event={"ID":"09f0762e-68d4-41e7-b255-cf8fd202d85d","Type":"ContainerStarted","Data":"95512368e7c2b2dd0dfc08b3da6173f3ca3ec382f80adeccec3ca3273208de02"} Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.013400 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ffn5t" event={"ID":"84c25b1c-374c-44b4-9259-577355ba9a53","Type":"ContainerStarted","Data":"cfbbb1a65199b5647399e3e8bf7a40e87678f32b677b2ee95ac64c5492f558c1"} Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.014726 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" event={"ID":"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f","Type":"ContainerStarted","Data":"9a6e0893346d006995aefb0ce24b2b638f52be0e491ca6ec27fe69858e37c5d7"} Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.200022 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 10 15:59:41 crc kubenswrapper[5114]: I1210 15:59:41.231113 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 10 15:59:43 crc kubenswrapper[5114]: I1210 15:59:43.029072 5114 generic.go:358] "Generic (PLEG): container finished" podID="1afb4e68-a57e-4ba5-945b-eba6ad03011c" containerID="fa2383d99465968c05d058988fb2dd4ea1dab68fc796c5b0e25daf0c400177ea" exitCode=0 Dec 10 15:59:43 crc kubenswrapper[5114]: I1210 15:59:43.029182 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"1afb4e68-a57e-4ba5-945b-eba6ad03011c","Type":"ContainerDied","Data":"fa2383d99465968c05d058988fb2dd4ea1dab68fc796c5b0e25daf0c400177ea"} Dec 10 15:59:44 crc kubenswrapper[5114]: I1210 15:59:44.035044 5114 generic.go:358] "Generic (PLEG): container finished" podID="1afb4e68-a57e-4ba5-945b-eba6ad03011c" containerID="e65c9fcdd45ddfae48d2a9bbf46fdd5467ba2f5866aafd65259d6d7e9a40c018" exitCode=0 Dec 10 15:59:44 crc kubenswrapper[5114]: I1210 15:59:44.035429 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1afb4e68-a57e-4ba5-945b-eba6ad03011c","Type":"ContainerDied","Data":"e65c9fcdd45ddfae48d2a9bbf46fdd5467ba2f5866aafd65259d6d7e9a40c018"} Dec 10 15:59:47 crc kubenswrapper[5114]: I1210 15:59:47.060207 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1afb4e68-a57e-4ba5-945b-eba6ad03011c","Type":"ContainerStarted","Data":"09849f4bb44d45fadbb3f89ff5fa868221fa37baf4a0d0199c6054e35e72291e"} Dec 10 15:59:47 crc kubenswrapper[5114]: I1210 15:59:47.060979 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 10 15:59:47 crc kubenswrapper[5114]: I1210 15:59:47.099008 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=8.140852545 podStartE2EDuration="31.098990645s" podCreationTimestamp="2025-12-10 15:59:16 +0000 UTC" firstStartedPulling="2025-12-10 15:59:17.180561528 +0000 UTC m=+782.901362705" lastFinishedPulling="2025-12-10 15:59:40.138699628 +0000 UTC m=+805.859500805" observedRunningTime="2025-12-10 15:59:47.09796886 +0000 UTC m=+812.818770037" watchObservedRunningTime="2025-12-10 15:59:47.098990645 +0000 UTC m=+812.819791822" Dec 10 15:59:47 crc kubenswrapper[5114]: I1210 15:59:47.790081 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.000435 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.000800 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.006038 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.006111 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.014358 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-rfsxx\"" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.019316 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.067306 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" event={"ID":"09f0762e-68d4-41e7-b255-cf8fd202d85d","Type":"ContainerStarted","Data":"fe539ab51f4e5f7a6dbe62241663003c9e7c938fed666996b58f2f0e4e4578e6"} Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.068822 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ffn5t" event={"ID":"84c25b1c-374c-44b4-9259-577355ba9a53","Type":"ContainerStarted","Data":"5ca0f95b6a50e730e4935b2b2a7be3ef4fe9b1efce33eabd5d4f555333c6d9ac"} Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.070167 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" event={"ID":"4ebbf3dc-20ca-4a5f-bbf1-9f1d90b8c25f","Type":"ContainerStarted","Data":"d853013258ccfc2d259bd18121e11bbd37fd9d252a726c7e9fa5baf1fc47cab2"} Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.070245 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.083096 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c66hg" podStartSLOduration=13.565080186 podStartE2EDuration="20.083075455s" podCreationTimestamp="2025-12-10 15:59:28 +0000 UTC" firstStartedPulling="2025-12-10 15:59:40.449488031 +0000 UTC m=+806.170289208" lastFinishedPulling="2025-12-10 15:59:46.9674833 +0000 UTC m=+812.688284477" observedRunningTime="2025-12-10 15:59:48.082118402 +0000 UTC m=+813.802919599" watchObservedRunningTime="2025-12-10 15:59:48.083075455 +0000 UTC m=+813.803876632" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.107641 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-ffn5t" podStartSLOduration=1.932416167 podStartE2EDuration="8.107624483s" podCreationTimestamp="2025-12-10 15:59:40 +0000 UTC" firstStartedPulling="2025-12-10 15:59:40.780266581 +0000 UTC m=+806.501067758" lastFinishedPulling="2025-12-10 15:59:46.955474887 +0000 UTC m=+812.676276074" observedRunningTime="2025-12-10 15:59:48.103362139 +0000 UTC m=+813.824163316" watchObservedRunningTime="2025-12-10 15:59:48.107624483 +0000 UTC m=+813.828425660" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123627 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8bnb\" (UniqueName: 
\"kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123681 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123701 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123718 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123884 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123916 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.123944 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.124011 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.124096 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.124136 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.124173 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.124242 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.225553 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.225598 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.225616 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.225661 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.225676 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226253 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226491 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226558 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226657 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226730 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226757 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226807 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226828 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226854 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226981 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.226976 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8bnb\" (UniqueName: \"kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.227052 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.227001 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.227356 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.227453 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.227590 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.232960 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.232972 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.246956 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8bnb\" (UniqueName: \"kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb\") pod \"service-telemetry-operator-1-build\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.319487 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.746694 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" podStartSLOduration=15.019838687 podStartE2EDuration="21.746672975s" podCreationTimestamp="2025-12-10 15:59:27 +0000 UTC" firstStartedPulling="2025-12-10 15:59:40.199392147 +0000 UTC m=+805.920193324" lastFinishedPulling="2025-12-10 15:59:46.926226435 +0000 UTC m=+812.647027612" observedRunningTime="2025-12-10 15:59:48.131139416 +0000 UTC m=+813.851940613" watchObservedRunningTime="2025-12-10 15:59:48.746672975 +0000 UTC m=+814.467474152" Dec 10 15:59:48 crc kubenswrapper[5114]: I1210 15:59:48.748158 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 15:59:49 crc kubenswrapper[5114]: I1210 15:59:49.087111 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bf74466-377f-4b7e-a633-841867898219","Type":"ContainerStarted","Data":"d84dcbe34d5bb74821e87d8f8f4e8e7c85df26833f9be918369f0d860119afbe"} Dec 10 15:59:54 crc kubenswrapper[5114]: I1210 15:59:54.096052 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zvg6n" Dec 10 15:59:57 crc kubenswrapper[5114]: I1210 15:59:57.138452 5114 generic.go:358] "Generic (PLEG): container finished" podID="3bf74466-377f-4b7e-a633-841867898219" containerID="fa3ece38cc1d615a4eb9ff4c791b8912ad856345223f8e8eee3aef2dd9c23992" exitCode=0 Dec 10 15:59:57 crc kubenswrapper[5114]: I1210 15:59:57.138516 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bf74466-377f-4b7e-a633-841867898219","Type":"ContainerDied","Data":"fa3ece38cc1d615a4eb9ff4c791b8912ad856345223f8e8eee3aef2dd9c23992"} Dec 10 15:59:58 crc kubenswrapper[5114]: I1210 15:59:58.147809 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bf74466-377f-4b7e-a633-841867898219","Type":"ContainerStarted","Data":"6ffd0cf55290976e049d9719537771b607e586fd454d3277e7d1e07af6ecf317"} Dec 10 15:59:58 crc kubenswrapper[5114]: I1210 15:59:58.153760 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="1afb4e68-a57e-4ba5-945b-eba6ad03011c" containerName="elasticsearch" probeResult="failure" output=< Dec 10 15:59:58 crc kubenswrapper[5114]: {"timestamp": "2025-12-10T15:59:58+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 10 15:59:58 crc kubenswrapper[5114]: > Dec 10 15:59:58 crc kubenswrapper[5114]: I1210 15:59:58.164601 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 15:59:58 crc kubenswrapper[5114]: I1210 15:59:58.185285 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=4.062649281 podStartE2EDuration="11.185249863s" podCreationTimestamp="2025-12-10 15:59:47 +0000 UTC" firstStartedPulling="2025-12-10 15:59:48.756913065 +0000 UTC m=+814.477714242" lastFinishedPulling="2025-12-10 15:59:55.879513657 +0000 UTC m=+821.600314824" observedRunningTime="2025-12-10 15:59:58.183704495 +0000 UTC m=+823.904505672" watchObservedRunningTime="2025-12-10 15:59:58.185249863 +0000 UTC m=+823.906051040" Dec 10 15:59:59 crc kubenswrapper[5114]: I1210 15:59:59.813137 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.428400 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.428639 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="docker-build" containerID="cri-o://6ffd0cf55290976e049d9719537771b607e586fd454d3277e7d1e07af6ecf317" gracePeriod=30 Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.429193 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.431470 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.432255 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.432466 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.438676 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4"] Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604138 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604183 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604222 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604241 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604267 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604351 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc 
kubenswrapper[5114]: I1210 16:00:01.604401 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604429 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604457 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604487 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604503 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknzh\" (UniqueName: \"kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.604549 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.611443 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4"] Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.611600 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.613761 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.614544 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705670 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705736 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705769 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lknzh\" (UniqueName: \"kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705810 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705877 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705901 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705904 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.705990 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.706048 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.706084 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.706393 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.706531 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.707013 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.707614 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.707761 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.706132 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: 
\"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.707976 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.707997 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.708015 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.708299 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.708441 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.713008 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.713346 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.724713 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lknzh\" (UniqueName: \"kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh\") pod \"service-telemetry-operator-2-build\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.746698 5114 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.809369 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.809937 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bsj\" (UniqueName: \"kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.810019 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.911899 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.912066 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bsj\" (UniqueName: \"kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.912666 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.913195 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc kubenswrapper[5114]: I1210 16:00:01.918772 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:01 crc 
kubenswrapper[5114]: I1210 16:00:01.931781 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bsj\" (UniqueName: \"kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj\") pod \"collect-profiles-29423040-kv4c4\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:02 crc kubenswrapper[5114]: I1210 16:00:02.225589 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:02 crc kubenswrapper[5114]: I1210 16:00:02.642011 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:02 crc kubenswrapper[5114]: W1210 16:00:02.650428 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4138b09_fddf_4cf9_90a4_fbb62dbfdd63.slice/crio-2fb8f43a12aa3ac5ce79d5fe2138254ecba911692d5028ba6766e04d866e7c2c WatchSource:0}: Error finding container 2fb8f43a12aa3ac5ce79d5fe2138254ecba911692d5028ba6766e04d866e7c2c: Status 404 returned error can't find the container with id 2fb8f43a12aa3ac5ce79d5fe2138254ecba911692d5028ba6766e04d866e7c2c Dec 10 16:00:02 crc kubenswrapper[5114]: I1210 16:00:02.669177 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4"] Dec 10 16:00:03 crc kubenswrapper[5114]: I1210 16:00:03.142193 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="1afb4e68-a57e-4ba5-945b-eba6ad03011c" containerName="elasticsearch" probeResult="failure" output=< Dec 10 16:00:03 crc kubenswrapper[5114]: {"timestamp": "2025-12-10T16:00:03+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 10 16:00:03 crc kubenswrapper[5114]: > Dec 10 16:00:03 crc kubenswrapper[5114]: I1210 16:00:03.177551 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63","Type":"ContainerStarted","Data":"2fb8f43a12aa3ac5ce79d5fe2138254ecba911692d5028ba6766e04d866e7c2c"} Dec 10 16:00:03 crc kubenswrapper[5114]: I1210 16:00:03.178646 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" event={"ID":"f5cc7d8b-cb39-41b8-8234-76582d080833","Type":"ContainerStarted","Data":"09dbdaf3c620c8b842fff91271ae6a3aaea502669650f0e301060dbefdb6cdf1"} Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.315333 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_3bf74466-377f-4b7e-a633-841867898219/docker-build/0.log" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.316103 5114 generic.go:358] "Generic (PLEG): container finished" podID="3bf74466-377f-4b7e-a633-841867898219" containerID="6ffd0cf55290976e049d9719537771b607e586fd454d3277e7d1e07af6ecf317" exitCode=1 Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.316367 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bf74466-377f-4b7e-a633-841867898219","Type":"ContainerDied","Data":"6ffd0cf55290976e049d9719537771b607e586fd454d3277e7d1e07af6ecf317"} Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.533844 5114 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_3bf74466-377f-4b7e-a633-841867898219/docker-build/0.log" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.534467 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591737 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591780 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591824 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591837 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591853 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591875 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8bnb\" (UniqueName: \"kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591900 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591915 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591936 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591957 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.591998 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.592125 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache\") pod \"3bf74466-377f-4b7e-a633-841867898219\" (UID: \"3bf74466-377f-4b7e-a633-841867898219\") " Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593079 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593139 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593321 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593443 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593702 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.593875 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.594228 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.594635 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.594817 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.598595 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-pull") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "builder-dockercfg-rfsxx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.598712 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb" (OuterVolumeSpecName: "kube-api-access-f8bnb") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "kube-api-access-f8bnb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.598613 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-push") pod "3bf74466-377f-4b7e-a633-841867898219" (UID: "3bf74466-377f-4b7e-a633-841867898219"). InnerVolumeSpecName "builder-dockercfg-rfsxx-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693143 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-push\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693411 5114 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693470 5114 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693533 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8bnb\" (UniqueName: \"kubernetes.io/projected/3bf74466-377f-4b7e-a633-841867898219-kube-api-access-f8bnb\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693588 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/3bf74466-377f-4b7e-a633-841867898219-builder-dockercfg-rfsxx-pull\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693642 5114 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bf74466-377f-4b7e-a633-841867898219-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693698 5114 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693753 5114 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bf74466-377f-4b7e-a633-841867898219-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693810 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693864 5114 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693917 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:07 crc kubenswrapper[5114]: I1210 16:00:07.693972 5114 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bf74466-377f-4b7e-a633-841867898219-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.325269 5114 generic.go:358] "Generic (PLEG): container finished" 
podID="f5cc7d8b-cb39-41b8-8234-76582d080833" containerID="d60ab4d39ab8790abac7e454f14eb163ae4dd8cc889f42b7c2b4444522a245db" exitCode=0 Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.325390 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" event={"ID":"f5cc7d8b-cb39-41b8-8234-76582d080833","Type":"ContainerDied","Data":"d60ab4d39ab8790abac7e454f14eb163ae4dd8cc889f42b7c2b4444522a245db"} Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.327917 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63","Type":"ContainerStarted","Data":"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d"} Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.330454 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_3bf74466-377f-4b7e-a633-841867898219/docker-build/0.log" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.330989 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bf74466-377f-4b7e-a633-841867898219","Type":"ContainerDied","Data":"d84dcbe34d5bb74821e87d8f8f4e8e7c85df26833f9be918369f0d860119afbe"} Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.331045 5114 scope.go:117] "RemoveContainer" containerID="6ffd0cf55290976e049d9719537771b607e586fd454d3277e7d1e07af6ecf317" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.331005 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.359516 5114 scope.go:117] "RemoveContainer" containerID="fa3ece38cc1d615a4eb9ff4c791b8912ad856345223f8e8eee3aef2dd9c23992" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.404047 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.409646 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.411148 5114 ???:1] "http: TLS handshake error from 192.168.126.11:34910: no serving certificate available for the kubelet" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.577629 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf74466-377f-4b7e-a633-841867898219" path="/var/lib/kubelet/pods/3bf74466-377f-4b7e-a633-841867898219/volumes" Dec 10 16:00:08 crc kubenswrapper[5114]: I1210 16:00:08.859818 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.436561 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.580830 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.620170 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume\") pod \"f5cc7d8b-cb39-41b8-8234-76582d080833\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.620299 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7bsj\" (UniqueName: \"kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj\") pod \"f5cc7d8b-cb39-41b8-8234-76582d080833\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.620333 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume\") pod \"f5cc7d8b-cb39-41b8-8234-76582d080833\" (UID: \"f5cc7d8b-cb39-41b8-8234-76582d080833\") " Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.621047 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5cc7d8b-cb39-41b8-8234-76582d080833" (UID: "f5cc7d8b-cb39-41b8-8234-76582d080833"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.631445 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5cc7d8b-cb39-41b8-8234-76582d080833" (UID: "f5cc7d8b-cb39-41b8-8234-76582d080833"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.633990 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj" (OuterVolumeSpecName: "kube-api-access-f7bsj") pod "f5cc7d8b-cb39-41b8-8234-76582d080833" (UID: "f5cc7d8b-cb39-41b8-8234-76582d080833"). InnerVolumeSpecName "kube-api-access-f7bsj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.721949 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f7bsj\" (UniqueName: \"kubernetes.io/projected/f5cc7d8b-cb39-41b8-8234-76582d080833-kube-api-access-f7bsj\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.721992 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cc7d8b-cb39-41b8-8234-76582d080833-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:09 crc kubenswrapper[5114]: I1210 16:00:09.722001 5114 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5cc7d8b-cb39-41b8-8234-76582d080833-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:10 crc kubenswrapper[5114]: I1210 16:00:10.347023 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" containerName="git-clone" containerID="cri-o://dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d" gracePeriod=30 Dec 10 16:00:10 crc kubenswrapper[5114]: I1210 16:00:10.347164 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" Dec 10 16:00:10 crc kubenswrapper[5114]: I1210 16:00:10.351647 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-kv4c4" event={"ID":"f5cc7d8b-cb39-41b8-8234-76582d080833","Type":"ContainerDied","Data":"09dbdaf3c620c8b842fff91271ae6a3aaea502669650f0e301060dbefdb6cdf1"} Dec 10 16:00:10 crc kubenswrapper[5114]: I1210 16:00:10.351878 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09dbdaf3c620c8b842fff91271ae6a3aaea502669650f0e301060dbefdb6cdf1" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.232607 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f4138b09-fddf-4cf9-90a4-fbb62dbfdd63/git-clone/0.log" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.232940 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.343962 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344010 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344047 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344106 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344155 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lknzh\" (UniqueName: \"kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344867 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345076 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345225 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.344379 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345618 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345648 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345679 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345712 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345761 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345817 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets\") pod \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\" (UID: \"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63\") " Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345885 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345982 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.345919 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346087 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346162 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346367 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346676 5114 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346710 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346729 5114 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346741 5114 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346752 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346764 5114 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346775 5114 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346792 5114 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.346803 5114 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.350779 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-push") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "builder-dockercfg-rfsxx-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354163 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f4138b09-fddf-4cf9-90a4-fbb62dbfdd63/git-clone/0.log" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354211 5114 generic.go:358] "Generic (PLEG): container finished" podID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" containerID="dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d" exitCode=1 Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354345 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63","Type":"ContainerDied","Data":"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d"} Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354365 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354384 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f4138b09-fddf-4cf9-90a4-fbb62dbfdd63","Type":"ContainerDied","Data":"2fb8f43a12aa3ac5ce79d5fe2138254ecba911692d5028ba6766e04d866e7c2c"} Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.354406 5114 scope.go:117] "RemoveContainer" containerID="dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.364565 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-pull") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "builder-dockercfg-rfsxx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.370752 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh" (OuterVolumeSpecName: "kube-api-access-lknzh") pod "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" (UID: "f4138b09-fddf-4cf9-90a4-fbb62dbfdd63"). InnerVolumeSpecName "kube-api-access-lknzh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.406095 5114 scope.go:117] "RemoveContainer" containerID="dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d" Dec 10 16:00:11 crc kubenswrapper[5114]: E1210 16:00:11.408102 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d\": container with ID starting with dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d not found: ID does not exist" containerID="dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.408161 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d"} err="failed to get container status \"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d\": rpc error: code = NotFound desc = could not find container \"dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d\": container with ID starting with dc0aba58bae646fb7533803f8fbad05186fdd2c558d8b81f54fd1e12c4be310d not found: ID does not exist" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.448493 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-push\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.448530 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lknzh\" (UniqueName: \"kubernetes.io/projected/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-kube-api-access-lknzh\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.448539 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63-builder-dockercfg-rfsxx-pull\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.685185 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:11 crc kubenswrapper[5114]: I1210 16:00:11.690924 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 10 16:00:12 crc kubenswrapper[5114]: I1210 16:00:12.578185 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" path="/var/lib/kubelet/pods/f4138b09-fddf-4cf9-90a4-fbb62dbfdd63/volumes" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.875359 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876607 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5cc7d8b-cb39-41b8-8234-76582d080833" containerName="collect-profiles" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876623 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cc7d8b-cb39-41b8-8234-76582d080833" containerName="collect-profiles" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876643 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" 
containerName="git-clone" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876650 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" containerName="git-clone" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876672 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="manage-dockerfile" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876680 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="manage-dockerfile" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876687 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="docker-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876693 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="docker-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876827 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3bf74466-377f-4b7e-a633-841867898219" containerName="docker-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876839 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4138b09-fddf-4cf9-90a4-fbb62dbfdd63" containerName="git-clone" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.876849 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5cc7d8b-cb39-41b8-8234-76582d080833" containerName="collect-profiles" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.881821 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.886813 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.886910 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.887136 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-rfsxx\"" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.887632 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.893371 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967005 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967056 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkj66\" (UniqueName: 
\"kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967088 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967108 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967132 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967163 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967180 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967200 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967221 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967259 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967315 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:20 crc kubenswrapper[5114]: I1210 16:00:20.967399 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.068873 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.068925 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lkj66\" (UniqueName: \"kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.068953 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.068982 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069006 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: 
\"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069245 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069303 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069338 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069397 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069445 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069485 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.069607 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070062 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070120 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070256 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070397 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070452 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070454 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.070525 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.071261 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.088940 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.089212 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.097055 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkj66\" (UniqueName: \"kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66\") pod \"service-telemetry-operator-3-build\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.196961 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:21 crc kubenswrapper[5114]: I1210 16:00:21.418229 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:22 crc kubenswrapper[5114]: I1210 16:00:22.424858 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f11b555a-b0ce-4bf2-a200-623075875865","Type":"ContainerStarted","Data":"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f"} Dec 10 16:00:22 crc kubenswrapper[5114]: I1210 16:00:22.425174 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f11b555a-b0ce-4bf2-a200-623075875865","Type":"ContainerStarted","Data":"ee561f743add4c6deae0ac141ad966632544ed3435b7e2717d7b8381408084a0"} Dec 10 16:00:22 crc kubenswrapper[5114]: I1210 16:00:22.477733 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60036: no serving certificate available for the kubelet" Dec 10 16:00:23 crc kubenswrapper[5114]: I1210 16:00:23.507738 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:24 crc kubenswrapper[5114]: I1210 16:00:24.438237 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="f11b555a-b0ce-4bf2-a200-623075875865" containerName="git-clone" containerID="cri-o://d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f" gracePeriod=30 Dec 10 16:00:24 crc kubenswrapper[5114]: I1210 16:00:24.878756 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f11b555a-b0ce-4bf2-a200-623075875865/git-clone/0.log" Dec 10 16:00:24 crc kubenswrapper[5114]: I1210 16:00:24.879415 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021652 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021739 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021765 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021777 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021818 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021862 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkj66\" (UniqueName: \"kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021913 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021939 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021961 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.021952 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022007 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022027 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022048 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022109 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022127 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push\") pod \"f11b555a-b0ce-4bf2-a200-623075875865\" (UID: \"f11b555a-b0ce-4bf2-a200-623075875865\") " Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022162 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022230 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022548 5114 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022562 5114 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022573 5114 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022581 5114 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f11b555a-b0ce-4bf2-a200-623075875865-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022589 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022588 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022606 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022628 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.022904 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.027305 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-push") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "builder-dockercfg-rfsxx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.027425 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-pull") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "builder-dockercfg-rfsxx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.028479 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66" (OuterVolumeSpecName: "kube-api-access-lkj66") pod "f11b555a-b0ce-4bf2-a200-623075875865" (UID: "f11b555a-b0ce-4bf2-a200-623075875865"). InnerVolumeSpecName "kube-api-access-lkj66". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123726 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkj66\" (UniqueName: \"kubernetes.io/projected/f11b555a-b0ce-4bf2-a200-623075875865-kube-api-access-lkj66\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123762 5114 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123770 5114 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f11b555a-b0ce-4bf2-a200-623075875865-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123778 5114 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123786 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f11b555a-b0ce-4bf2-a200-623075875865-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123794 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-pull\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.123802 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/f11b555a-b0ce-4bf2-a200-623075875865-builder-dockercfg-rfsxx-push\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.444942 5114 log.go:25] "Finished parsing log 
file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f11b555a-b0ce-4bf2-a200-623075875865/git-clone/0.log" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.444989 5114 generic.go:358] "Generic (PLEG): container finished" podID="f11b555a-b0ce-4bf2-a200-623075875865" containerID="d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f" exitCode=1 Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.445017 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f11b555a-b0ce-4bf2-a200-623075875865","Type":"ContainerDied","Data":"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f"} Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.445046 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f11b555a-b0ce-4bf2-a200-623075875865","Type":"ContainerDied","Data":"ee561f743add4c6deae0ac141ad966632544ed3435b7e2717d7b8381408084a0"} Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.445064 5114 scope.go:117] "RemoveContainer" containerID="d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.445096 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.463702 5114 scope.go:117] "RemoveContainer" containerID="d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f" Dec 10 16:00:25 crc kubenswrapper[5114]: E1210 16:00:25.464171 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f\": container with ID starting with d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f not found: ID does not exist" containerID="d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.464244 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f"} err="failed to get container status \"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f\": rpc error: code = NotFound desc = could not find container \"d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f\": container with ID starting with d3a5dabbf6a3dfc1e766278a67ff1ab2c7b759f254b9738ca34dcf4a68c48e6f not found: ID does not exist" Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.479083 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:25 crc kubenswrapper[5114]: I1210 16:00:25.484697 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 10 16:00:26 crc kubenswrapper[5114]: I1210 16:00:26.576560 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11b555a-b0ce-4bf2-a200-623075875865" path="/var/lib/kubelet/pods/f11b555a-b0ce-4bf2-a200-623075875865/volumes" Dec 10 16:00:34 crc kubenswrapper[5114]: I1210 16:00:34.920645 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:34 crc kubenswrapper[5114]: I1210 16:00:34.921840 5114 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="f11b555a-b0ce-4bf2-a200-623075875865" containerName="git-clone" Dec 10 16:00:34 crc kubenswrapper[5114]: I1210 16:00:34.921860 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11b555a-b0ce-4bf2-a200-623075875865" containerName="git-clone" Dec 10 16:00:34 crc kubenswrapper[5114]: I1210 16:00:34.921993 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="f11b555a-b0ce-4bf2-a200-623075875865" containerName="git-clone" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.170797 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.170992 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.174005 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-rfsxx\"" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.174167 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.174028 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.174494 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.352835 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.352892 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.352925 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.352957 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353000 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353068 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353141 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353182 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353212 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltml\" (UniqueName: \"kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353241 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353263 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.353411 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454607 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454664 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454692 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454714 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454735 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454785 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454824 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454862 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454943 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: 
\"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.454976 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455021 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455335 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455258 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltml\" (UniqueName: \"kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455499 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455541 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455591 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455688 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.455949 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.456230 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.456248 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.456383 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.461108 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.464011 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.473416 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltml\" (UniqueName: \"kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml\") pod \"service-telemetry-operator-4-build\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.494919 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:35 crc kubenswrapper[5114]: I1210 16:00:35.924091 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:36 crc kubenswrapper[5114]: I1210 16:00:36.510850 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"9e4486b7-a398-422f-b2fc-708035002e4c","Type":"ContainerStarted","Data":"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399"} Dec 10 16:00:36 crc kubenswrapper[5114]: I1210 16:00:36.511170 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"9e4486b7-a398-422f-b2fc-708035002e4c","Type":"ContainerStarted","Data":"2e2906291733078e815a94121e34b43e4102b469fe4ba9ec203719b45f3ad259"} Dec 10 16:00:36 crc kubenswrapper[5114]: I1210 16:00:36.553749 5114 ???:1] "http: TLS handshake error from 192.168.126.11:59232: no serving certificate available for the kubelet" Dec 10 16:00:37 crc kubenswrapper[5114]: I1210 16:00:37.583052 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.522658 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="9e4486b7-a398-422f-b2fc-708035002e4c" containerName="git-clone" containerID="cri-o://d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399" gracePeriod=30 Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.929925 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_9e4486b7-a398-422f-b2fc-708035002e4c/git-clone/0.log" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.930094 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999066 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999155 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999181 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999220 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999301 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999352 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999369 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999436 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999488 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999523 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ltml\" (UniqueName: \"kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999580 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999692 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999731 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999774 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999848 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999875 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:38 crc kubenswrapper[5114]: I1210 16:00:38.999882 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir\") pod \"9e4486b7-a398-422f-b2fc-708035002e4c\" (UID: \"9e4486b7-a398-422f-b2fc-708035002e4c\") " Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.000238 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.000249 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.000504 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.000830 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001659 5114 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001697 5114 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e4486b7-a398-422f-b2fc-708035002e4c-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001716 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001738 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001758 5114 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001775 5114 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001792 5114 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e4486b7-a398-422f-b2fc-708035002e4c-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001809 5114 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.001825 5114 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e4486b7-a398-422f-b2fc-708035002e4c-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.005600 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-pull") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "builder-dockercfg-rfsxx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.006169 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml" (OuterVolumeSpecName: "kube-api-access-7ltml") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "kube-api-access-7ltml". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.013132 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-push") pod "9e4486b7-a398-422f-b2fc-708035002e4c" (UID: "9e4486b7-a398-422f-b2fc-708035002e4c"). InnerVolumeSpecName "builder-dockercfg-rfsxx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.103701 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-push\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.103763 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7ltml\" (UniqueName: \"kubernetes.io/projected/9e4486b7-a398-422f-b2fc-708035002e4c-kube-api-access-7ltml\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.103776 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9e4486b7-a398-422f-b2fc-708035002e4c-builder-dockercfg-rfsxx-pull\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.531515 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_9e4486b7-a398-422f-b2fc-708035002e4c/git-clone/0.log" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.531856 5114 generic.go:358] "Generic (PLEG): container finished" podID="9e4486b7-a398-422f-b2fc-708035002e4c" containerID="d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399" exitCode=1 Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.531999 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.532025 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"9e4486b7-a398-422f-b2fc-708035002e4c","Type":"ContainerDied","Data":"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399"} Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.532071 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"9e4486b7-a398-422f-b2fc-708035002e4c","Type":"ContainerDied","Data":"2e2906291733078e815a94121e34b43e4102b469fe4ba9ec203719b45f3ad259"} Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.532092 5114 scope.go:117] "RemoveContainer" containerID="d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.557685 5114 scope.go:117] "RemoveContainer" containerID="d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399" Dec 10 16:00:39 crc kubenswrapper[5114]: E1210 16:00:39.558355 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399\": container with ID starting with d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399 not found: ID does not exist" containerID="d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.558392 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399"} err="failed to get container status \"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399\": rpc error: code = NotFound desc = could not find container \"d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399\": container with ID starting with d2caee3b71051f8546616b86da37330842cf11deb439b862a8254ca377a52399 not found: ID does not exist" Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.570530 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:39 crc kubenswrapper[5114]: I1210 16:00:39.593550 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 10 16:00:40 crc kubenswrapper[5114]: I1210 16:00:40.578777 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4486b7-a398-422f-b2fc-708035002e4c" path="/var/lib/kubelet/pods/9e4486b7-a398-422f-b2fc-708035002e4c/volumes" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.054811 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.056128 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e4486b7-a398-422f-b2fc-708035002e4c" containerName="git-clone" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.056145 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4486b7-a398-422f-b2fc-708035002e4c" containerName="git-clone" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.056265 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e4486b7-a398-422f-b2fc-708035002e4c" containerName="git-clone" Dec 10 16:00:49 crc 
kubenswrapper[5114]: I1210 16:00:49.063759 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.065374 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-rfsxx\"" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.065865 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.066356 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.066606 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.076495 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142642 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142692 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142739 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzmd\" (UniqueName: \"kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142771 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142801 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142822 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-pull\" 
(UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142953 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.142997 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.143020 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.143205 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.143241 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.143264 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.244976 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245068 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245099 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245129 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245220 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245240 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245292 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245315 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245336 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzmd\" (UniqueName: \"kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245389 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245426 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245855 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.245943 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246024 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246136 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246198 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246334 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246337 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246754 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.246934 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.251916 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.260163 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.263232 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzmd\" (UniqueName: \"kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd\") pod \"service-telemetry-operator-5-build\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.380702 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:49 crc kubenswrapper[5114]: I1210 16:00:49.595549 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:50 crc kubenswrapper[5114]: I1210 16:00:50.603787 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"9bb83b20-1179-43e4-868d-354c8a94f6be","Type":"ContainerStarted","Data":"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7"} Dec 10 16:00:50 crc kubenswrapper[5114]: I1210 16:00:50.604100 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"9bb83b20-1179-43e4-868d-354c8a94f6be","Type":"ContainerStarted","Data":"09d6ddfd8aca95805d62514c4c2c931330628e5fbb8eef6e2356a8dc7cd7fe68"} Dec 10 16:00:50 crc kubenswrapper[5114]: I1210 16:00:50.651033 5114 ???:1] "http: TLS handshake error from 192.168.126.11:53554: no serving certificate available for the kubelet" Dec 10 16:00:51 crc kubenswrapper[5114]: I1210 16:00:51.681509 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:51 crc kubenswrapper[5114]: I1210 16:00:51.876749 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:00:51 crc kubenswrapper[5114]: I1210 16:00:51.876834 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:00:52 crc kubenswrapper[5114]: I1210 16:00:52.620761 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="9bb83b20-1179-43e4-868d-354c8a94f6be" containerName="git-clone" containerID="cri-o://103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7" gracePeriod=30 Dec 10 16:00:52 crc kubenswrapper[5114]: I1210 16:00:52.978539 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_9bb83b20-1179-43e4-868d-354c8a94f6be/git-clone/0.log" Dec 10 16:00:52 crc kubenswrapper[5114]: I1210 16:00:52.978873 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.098846 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099059 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099120 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099167 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099201 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099220 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099254 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099264 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099302 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099375 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzmd\" (UniqueName: \"kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099400 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099460 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099449 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099477 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles\") pod \"9bb83b20-1179-43e4-868d-354c8a94f6be\" (UID: \"9bb83b20-1179-43e4-868d-354c8a94f6be\") " Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099729 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.099808 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100036 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100082 5114 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100101 5114 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100101 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100114 5114 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bb83b20-1179-43e4-868d-354c8a94f6be-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100132 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100148 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100165 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.100395 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.103716 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-push") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "builder-dockercfg-rfsxx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.103809 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull" (OuterVolumeSpecName: "builder-dockercfg-rfsxx-pull") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "builder-dockercfg-rfsxx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.104056 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd" (OuterVolumeSpecName: "kube-api-access-vvzmd") pod "9bb83b20-1179-43e4-868d-354c8a94f6be" (UID: "9bb83b20-1179-43e4-868d-354c8a94f6be"). InnerVolumeSpecName "kube-api-access-vvzmd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201124 5114 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201188 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-push\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-push\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201202 5114 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201212 5114 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-rfsxx-pull\" (UniqueName: \"kubernetes.io/secret/9bb83b20-1179-43e4-868d-354c8a94f6be-builder-dockercfg-rfsxx-pull\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201222 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvzmd\" (UniqueName: \"kubernetes.io/projected/9bb83b20-1179-43e4-868d-354c8a94f6be-kube-api-access-vvzmd\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201233 5114 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9bb83b20-1179-43e4-868d-354c8a94f6be-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201243 5114 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.201254 5114 reconciler_common.go:299] "Volume 
detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb83b20-1179-43e4-868d-354c8a94f6be-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.628901 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_9bb83b20-1179-43e4-868d-354c8a94f6be/git-clone/0.log" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.629149 5114 generic.go:358] "Generic (PLEG): container finished" podID="9bb83b20-1179-43e4-868d-354c8a94f6be" containerID="103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7" exitCode=1 Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.629264 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"9bb83b20-1179-43e4-868d-354c8a94f6be","Type":"ContainerDied","Data":"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7"} Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.629346 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"9bb83b20-1179-43e4-868d-354c8a94f6be","Type":"ContainerDied","Data":"09d6ddfd8aca95805d62514c4c2c931330628e5fbb8eef6e2356a8dc7cd7fe68"} Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.629384 5114 scope.go:117] "RemoveContainer" containerID="103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.629416 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.656921 5114 scope.go:117] "RemoveContainer" containerID="103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7" Dec 10 16:00:53 crc kubenswrapper[5114]: E1210 16:00:53.657906 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7\": container with ID starting with 103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7 not found: ID does not exist" containerID="103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.657940 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7"} err="failed to get container status \"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7\": rpc error: code = NotFound desc = could not find container \"103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7\": container with ID starting with 103d12a86377d432b13fe3bc22221f13ab6a3c78bf3375479713a068d10d36e7 not found: ID does not exist" Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.672429 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:53 crc kubenswrapper[5114]: I1210 16:00:53.679125 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 10 16:00:54 crc kubenswrapper[5114]: I1210 16:00:54.577539 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb83b20-1179-43e4-868d-354c8a94f6be" path="/var/lib/kubelet/pods/9bb83b20-1179-43e4-868d-354c8a94f6be/volumes" Dec 
10 16:01:14 crc kubenswrapper[5114]: I1210 16:01:14.892804 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lg6m5_e7c683ba-536f-45e5-89b0-fe14989cad13/kube-multus/0.log" Dec 10 16:01:14 crc kubenswrapper[5114]: I1210 16:01:14.901181 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 16:01:14 crc kubenswrapper[5114]: I1210 16:01:14.902076 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lg6m5_e7c683ba-536f-45e5-89b0-fe14989cad13/kube-multus/0.log" Dec 10 16:01:14 crc kubenswrapper[5114]: I1210 16:01:14.911498 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 10 16:01:21 crc kubenswrapper[5114]: I1210 16:01:21.877115 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:01:21 crc kubenswrapper[5114]: I1210 16:01:21.877721 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.887179 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6d47/must-gather-zpv7l"] Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.888640 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bb83b20-1179-43e4-868d-354c8a94f6be" containerName="git-clone" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.888657 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb83b20-1179-43e4-868d-354c8a94f6be" containerName="git-clone" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.888780 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bb83b20-1179-43e4-868d-354c8a94f6be" containerName="git-clone" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.920289 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6d47/must-gather-zpv7l"] Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.920509 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.922254 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-k6d47\"/\"default-dockercfg-n98th\"" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.922740 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-k6d47\"/\"kube-root-ca.crt\"" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.922799 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-k6d47\"/\"openshift-service-ca.crt\"" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.973201 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7lc\" (UniqueName: \"kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:38 crc kubenswrapper[5114]: I1210 16:01:38.973284 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.074958 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.075348 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7lc\" (UniqueName: \"kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.075532 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.092387 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7lc\" (UniqueName: \"kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc\") pod \"must-gather-zpv7l\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.235939 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:01:39 crc kubenswrapper[5114]: I1210 16:01:39.421876 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6d47/must-gather-zpv7l"] Dec 10 16:01:39 crc kubenswrapper[5114]: W1210 16:01:39.424410 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1115490a_10b8_4638_8eba_11920b06bdf0.slice/crio-a04a8615c69e6153b502fd17f7294cd0f991a1ae4811845b133ddf1cd88d63c1 WatchSource:0}: Error finding container a04a8615c69e6153b502fd17f7294cd0f991a1ae4811845b133ddf1cd88d63c1: Status 404 returned error can't find the container with id a04a8615c69e6153b502fd17f7294cd0f991a1ae4811845b133ddf1cd88d63c1 Dec 10 16:01:40 crc kubenswrapper[5114]: I1210 16:01:40.128679 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6d47/must-gather-zpv7l" event={"ID":"1115490a-10b8-4638-8eba-11920b06bdf0","Type":"ContainerStarted","Data":"a04a8615c69e6153b502fd17f7294cd0f991a1ae4811845b133ddf1cd88d63c1"} Dec 10 16:01:51 crc kubenswrapper[5114]: I1210 16:01:51.876847 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:01:51 crc kubenswrapper[5114]: I1210 16:01:51.877188 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:01:51 crc kubenswrapper[5114]: I1210 16:01:51.877240 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 16:01:51 crc kubenswrapper[5114]: I1210 16:01:51.877928 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a"} pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:01:51 crc kubenswrapper[5114]: I1210 16:01:51.878001 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" containerID="cri-o://32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a" gracePeriod=600 Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.215306 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6d47/must-gather-zpv7l" event={"ID":"1115490a-10b8-4638-8eba-11920b06bdf0","Type":"ContainerStarted","Data":"4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00"} Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.215675 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6d47/must-gather-zpv7l" event={"ID":"1115490a-10b8-4638-8eba-11920b06bdf0","Type":"ContainerStarted","Data":"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041"} Dec 10 16:01:52 
crc kubenswrapper[5114]: I1210 16:01:52.218393 5114 generic.go:358] "Generic (PLEG): container finished" podID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerID="32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a" exitCode=0 Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.218467 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerDied","Data":"32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a"} Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.218521 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"2eb85c8fa03347d17519dbe6f59409b495aeaa50fb25ad16d3ca4bfe4a68b80b"} Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.218540 5114 scope.go:117] "RemoveContainer" containerID="f8ac6cb7db909be515720174d5ba73e527683069dfdeb99dbbc7ffd78484ea8c" Dec 10 16:01:52 crc kubenswrapper[5114]: I1210 16:01:52.238255 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k6d47/must-gather-zpv7l" podStartSLOduration=2.059653343 podStartE2EDuration="14.238230791s" podCreationTimestamp="2025-12-10 16:01:38 +0000 UTC" firstStartedPulling="2025-12-10 16:01:39.426026177 +0000 UTC m=+925.146827354" lastFinishedPulling="2025-12-10 16:01:51.604603625 +0000 UTC m=+937.325404802" observedRunningTime="2025-12-10 16:01:52.231127283 +0000 UTC m=+937.951928460" watchObservedRunningTime="2025-12-10 16:01:52.238230791 +0000 UTC m=+937.959031978" Dec 10 16:01:58 crc kubenswrapper[5114]: I1210 16:01:58.884492 5114 ???:1] "http: TLS handshake error from 192.168.126.11:58850: no serving certificate available for the kubelet" Dec 10 16:02:28 crc kubenswrapper[5114]: I1210 16:02:28.708369 5114 ???:1] "http: TLS handshake error from 192.168.126.11:51310: no serving certificate available for the kubelet" Dec 10 16:02:28 crc kubenswrapper[5114]: I1210 16:02:28.835991 5114 ???:1] "http: TLS handshake error from 192.168.126.11:51320: no serving certificate available for the kubelet" Dec 10 16:02:28 crc kubenswrapper[5114]: I1210 16:02:28.842934 5114 ???:1] "http: TLS handshake error from 192.168.126.11:51322: no serving certificate available for the kubelet" Dec 10 16:02:38 crc kubenswrapper[5114]: I1210 16:02:38.860890 5114 ???:1] "http: TLS handshake error from 192.168.126.11:52684: no serving certificate available for the kubelet" Dec 10 16:02:39 crc kubenswrapper[5114]: I1210 16:02:39.014457 5114 ???:1] "http: TLS handshake error from 192.168.126.11:52698: no serving certificate available for the kubelet" Dec 10 16:02:39 crc kubenswrapper[5114]: I1210 16:02:39.035318 5114 ???:1] "http: TLS handshake error from 192.168.126.11:52706: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.314315 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43938: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.487857 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43942: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.508209 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43952: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: 
I1210 16:02:52.513601 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43964: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.667426 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43980: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.667852 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43966: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.684445 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43984: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.850796 5114 ???:1] "http: TLS handshake error from 192.168.126.11:43988: no serving certificate available for the kubelet" Dec 10 16:02:52 crc kubenswrapper[5114]: I1210 16:02:52.999593 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44000: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.003531 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44008: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.016367 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44016: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.172573 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44022: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.173809 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44024: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.190537 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44038: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.322352 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44054: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.486118 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44064: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.489939 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44068: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.509067 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44080: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.680474 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44096: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.680927 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44098: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.709842 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44102: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.831083 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44112: no serving certificate available for the kubelet" Dec 10 16:02:53 crc kubenswrapper[5114]: I1210 16:02:53.983536 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44118: no serving certificate available for the kubelet" Dec 10 16:02:53 
crc kubenswrapper[5114]: I1210 16:02:53.997184 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44130: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.018546 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44142: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.183322 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44148: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.196292 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44154: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.200732 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44160: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.385838 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44172: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.504301 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44178: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.509582 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44190: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.535811 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44204: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.679362 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44212: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.693382 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44226: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.694166 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44234: no serving certificate available for the kubelet" Dec 10 16:02:54 crc kubenswrapper[5114]: I1210 16:02:54.846358 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44238: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.015947 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44244: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.038446 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44260: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.068458 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44274: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.242249 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44282: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.246018 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44284: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.258126 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44288: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.303217 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44298: no serving certificate available for the 
kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.408400 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44308: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.529867 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44320: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.551612 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44322: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.560196 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44332: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.687178 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44348: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.711929 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44364: no serving certificate available for the kubelet" Dec 10 16:02:55 crc kubenswrapper[5114]: I1210 16:02:55.712384 5114 ???:1] "http: TLS handshake error from 192.168.126.11:44374: no serving certificate available for the kubelet" Dec 10 16:03:06 crc kubenswrapper[5114]: I1210 16:03:06.022544 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60250: no serving certificate available for the kubelet" Dec 10 16:03:06 crc kubenswrapper[5114]: I1210 16:03:06.189550 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60258: no serving certificate available for the kubelet" Dec 10 16:03:06 crc kubenswrapper[5114]: I1210 16:03:06.192165 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60266: no serving certificate available for the kubelet" Dec 10 16:03:06 crc kubenswrapper[5114]: I1210 16:03:06.371520 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60268: no serving certificate available for the kubelet" Dec 10 16:03:06 crc kubenswrapper[5114]: I1210 16:03:06.373066 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60270: no serving certificate available for the kubelet" Dec 10 16:03:07 crc kubenswrapper[5114]: E1210 16:03:07.578145 5114 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.707053 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.716711 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.732828 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60286: no serving certificate available for the kubelet" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.762358 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60298: no serving certificate available for the kubelet" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.795090 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60306: no serving certificate available for the kubelet" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 16:03:09.842065 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60316: no serving certificate available for the kubelet" Dec 10 16:03:09 crc kubenswrapper[5114]: I1210 
16:03:09.905023 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60332: no serving certificate available for the kubelet" Dec 10 16:03:10 crc kubenswrapper[5114]: I1210 16:03:10.008639 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60336: no serving certificate available for the kubelet" Dec 10 16:03:10 crc kubenswrapper[5114]: I1210 16:03:10.191547 5114 ???:1] "http: TLS handshake error from 192.168.126.11:60338: no serving certificate available for the kubelet" Dec 10 16:03:10 crc kubenswrapper[5114]: I1210 16:03:10.538815 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33572: no serving certificate available for the kubelet" Dec 10 16:03:11 crc kubenswrapper[5114]: I1210 16:03:11.202932 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33578: no serving certificate available for the kubelet" Dec 10 16:03:12 crc kubenswrapper[5114]: I1210 16:03:12.504146 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33586: no serving certificate available for the kubelet" Dec 10 16:03:15 crc kubenswrapper[5114]: I1210 16:03:15.089497 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33600: no serving certificate available for the kubelet" Dec 10 16:03:20 crc kubenswrapper[5114]: I1210 16:03:20.236455 5114 ???:1] "http: TLS handshake error from 192.168.126.11:33608: no serving certificate available for the kubelet" Dec 10 16:03:30 crc kubenswrapper[5114]: I1210 16:03:30.498512 5114 ???:1] "http: TLS handshake error from 192.168.126.11:51002: no serving certificate available for the kubelet" Dec 10 16:03:45 crc kubenswrapper[5114]: I1210 16:03:45.230099 5114 generic.go:358] "Generic (PLEG): container finished" podID="1115490a-10b8-4638-8eba-11920b06bdf0" containerID="a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041" exitCode=0 Dec 10 16:03:45 crc kubenswrapper[5114]: I1210 16:03:45.230193 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6d47/must-gather-zpv7l" event={"ID":"1115490a-10b8-4638-8eba-11920b06bdf0","Type":"ContainerDied","Data":"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041"} Dec 10 16:03:45 crc kubenswrapper[5114]: I1210 16:03:45.232127 5114 scope.go:117] "RemoveContainer" containerID="a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041" Dec 10 16:03:48 crc kubenswrapper[5114]: I1210 16:03:48.898921 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57462: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.033139 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57466: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.044319 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57480: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.066609 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57488: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.077610 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57494: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.094535 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57496: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.104644 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57500: no serving certificate available 
for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.117696 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57504: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.129574 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57506: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.299664 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57514: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.310840 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57522: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.333161 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57536: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.344834 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57542: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.360365 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57550: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.373376 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57566: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.388265 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57578: no serving certificate available for the kubelet" Dec 10 16:03:49 crc kubenswrapper[5114]: I1210 16:03:49.402460 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57582: no serving certificate available for the kubelet" Dec 10 16:03:51 crc kubenswrapper[5114]: I1210 16:03:51.006702 5114 ???:1] "http: TLS handshake error from 192.168.126.11:41900: no serving certificate available for the kubelet" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.435777 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k6d47/must-gather-zpv7l"] Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.436414 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-k6d47/must-gather-zpv7l" podUID="1115490a-10b8-4638-8eba-11920b06bdf0" containerName="copy" containerID="cri-o://4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00" gracePeriod=2 Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.440357 5114 status_manager.go:895] "Failed to get status for pod" podUID="1115490a-10b8-4638-8eba-11920b06bdf0" pod="openshift-must-gather-k6d47/must-gather-zpv7l" err="pods \"must-gather-zpv7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-k6d47\": no relationship found between node 'crc' and this object" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.442968 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k6d47/must-gather-zpv7l"] Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.574077 5114 status_manager.go:895] "Failed to get status for pod" podUID="1115490a-10b8-4638-8eba-11920b06bdf0" pod="openshift-must-gather-k6d47/must-gather-zpv7l" err="pods \"must-gather-zpv7l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-k6d47\": no relationship found between node 
'crc' and this object" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.779102 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k6d47_must-gather-zpv7l_1115490a-10b8-4638-8eba-11920b06bdf0/copy/0.log" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.779833 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.896610 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output\") pod \"1115490a-10b8-4638-8eba-11920b06bdf0\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.896703 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv7lc\" (UniqueName: \"kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc\") pod \"1115490a-10b8-4638-8eba-11920b06bdf0\" (UID: \"1115490a-10b8-4638-8eba-11920b06bdf0\") " Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.907251 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc" (OuterVolumeSpecName: "kube-api-access-zv7lc") pod "1115490a-10b8-4638-8eba-11920b06bdf0" (UID: "1115490a-10b8-4638-8eba-11920b06bdf0"). InnerVolumeSpecName "kube-api-access-zv7lc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.939606 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1115490a-10b8-4638-8eba-11920b06bdf0" (UID: "1115490a-10b8-4638-8eba-11920b06bdf0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.998233 5114 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1115490a-10b8-4638-8eba-11920b06bdf0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:54 crc kubenswrapper[5114]: I1210 16:03:54.998272 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zv7lc\" (UniqueName: \"kubernetes.io/projected/1115490a-10b8-4638-8eba-11920b06bdf0-kube-api-access-zv7lc\") on node \"crc\" DevicePath \"\"" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.293167 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k6d47_must-gather-zpv7l_1115490a-10b8-4638-8eba-11920b06bdf0/copy/0.log" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.293547 5114 generic.go:358] "Generic (PLEG): container finished" podID="1115490a-10b8-4638-8eba-11920b06bdf0" containerID="4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00" exitCode=143 Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.293631 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6d47/must-gather-zpv7l" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.293692 5114 scope.go:117] "RemoveContainer" containerID="4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.316515 5114 scope.go:117] "RemoveContainer" containerID="a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.379576 5114 scope.go:117] "RemoveContainer" containerID="4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00" Dec 10 16:03:55 crc kubenswrapper[5114]: E1210 16:03:55.379919 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00\": container with ID starting with 4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00 not found: ID does not exist" containerID="4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.379948 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00"} err="failed to get container status \"4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00\": rpc error: code = NotFound desc = could not find container \"4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00\": container with ID starting with 4111f117cad630c4a38d678720efb491660e32643e3729e247f5fafec616bb00 not found: ID does not exist" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.379969 5114 scope.go:117] "RemoveContainer" containerID="a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041" Dec 10 16:03:55 crc kubenswrapper[5114]: E1210 16:03:55.380203 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041\": container with ID starting with a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041 not found: ID does not exist" containerID="a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041" Dec 10 16:03:55 crc kubenswrapper[5114]: I1210 16:03:55.380218 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041"} err="failed to get container status \"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041\": rpc error: code = NotFound desc = could not find container \"a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041\": container with ID starting with a88538d6dac9de3b4d9b53abfd023689524d241dd3fd8de72339d8afb04f9041 not found: ID does not exist" Dec 10 16:03:56 crc kubenswrapper[5114]: I1210 16:03:56.577945 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1115490a-10b8-4638-8eba-11920b06bdf0" path="/var/lib/kubelet/pods/1115490a-10b8-4638-8eba-11920b06bdf0/volumes" Dec 10 16:04:21 crc kubenswrapper[5114]: I1210 16:04:21.876632 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:04:21 crc kubenswrapper[5114]: 
I1210 16:04:21.877085 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:04:31 crc kubenswrapper[5114]: I1210 16:04:31.998789 5114 ???:1] "http: TLS handshake error from 192.168.126.11:56578: no serving certificate available for the kubelet" Dec 10 16:04:51 crc kubenswrapper[5114]: I1210 16:04:51.876992 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:04:51 crc kubenswrapper[5114]: I1210 16:04:51.878049 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:05:15 crc kubenswrapper[5114]: I1210 16:05:15.315946 5114 scope.go:117] "RemoveContainer" containerID="92e8a0942bdc7ddccdf297ee88fdcddec0f89e7db8e6b54983c5d2b40b9c3d4b" Dec 10 16:05:15 crc kubenswrapper[5114]: I1210 16:05:15.334249 5114 scope.go:117] "RemoveContainer" containerID="9ea8e0d2ac02442312066d859ee1d1bf49c1f02351e65b3415930311aac38ce5" Dec 10 16:05:15 crc kubenswrapper[5114]: I1210 16:05:15.352994 5114 scope.go:117] "RemoveContainer" containerID="4facd23bc0ea33809089e2b10df1a007436f0c7e4178bc16d7ac09f86b43f6d9" Dec 10 16:05:21 crc kubenswrapper[5114]: I1210 16:05:21.877028 5114 patch_prober.go:28] interesting pod/machine-config-daemon-pvhhc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:05:21 crc kubenswrapper[5114]: I1210 16:05:21.877653 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:05:21 crc kubenswrapper[5114]: I1210 16:05:21.877701 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" Dec 10 16:05:21 crc kubenswrapper[5114]: I1210 16:05:21.878402 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2eb85c8fa03347d17519dbe6f59409b495aeaa50fb25ad16d3ca4bfe4a68b80b"} pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:05:21 crc kubenswrapper[5114]: I1210 16:05:21.878472 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" podUID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerName="machine-config-daemon" 
containerID="cri-o://2eb85c8fa03347d17519dbe6f59409b495aeaa50fb25ad16d3ca4bfe4a68b80b" gracePeriod=600 Dec 10 16:05:22 crc kubenswrapper[5114]: I1210 16:05:22.002619 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:05:22 crc kubenswrapper[5114]: I1210 16:05:22.866999 5114 generic.go:358] "Generic (PLEG): container finished" podID="b38ac556-07b2-4e25-9595-6adae4fcecb7" containerID="2eb85c8fa03347d17519dbe6f59409b495aeaa50fb25ad16d3ca4bfe4a68b80b" exitCode=0 Dec 10 16:05:22 crc kubenswrapper[5114]: I1210 16:05:22.867043 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerDied","Data":"2eb85c8fa03347d17519dbe6f59409b495aeaa50fb25ad16d3ca4bfe4a68b80b"} Dec 10 16:05:22 crc kubenswrapper[5114]: I1210 16:05:22.867431 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pvhhc" event={"ID":"b38ac556-07b2-4e25-9595-6adae4fcecb7","Type":"ContainerStarted","Data":"bc314639ad04078dfbba39e1db39ba1a1d9b3f6cfd69303307ccfef48811461b"} Dec 10 16:05:22 crc kubenswrapper[5114]: I1210 16:05:22.867466 5114 scope.go:117] "RemoveContainer" containerID="32e0cfb2943a8eeb1eb14112edafb4219bb4d51ca24ba6abc85b691ebf51d97a" Dec 10 16:05:53 crc kubenswrapper[5114]: I1210 16:05:53.945676 5114 ???:1] "http: TLS handshake error from 192.168.126.11:39618: no serving certificate available for the kubelet" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515116315150024443 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015116315151017361 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015116312313016501 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015116312313015451 5ustar corecore